id | title | track | status | keywords | primary_area | author | authorids | aff | aff_domain | position | rating | confidence | soundness | contribution | presentation | rating_avg | confidence_avg | soundness_avg | contribution_avg | presentation_avg | corr_rating_confidence | project | github | Review |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3iJ7eSj2rE | Synergistic Weak-Strong Collaboration by Aligning Preferences | main | Active | Weak-Strong Model Collaboration;Preferences Tuning;Large Language Model | applications to computer vision, audio, language, and other modalities | 3;3;5;5 | 4;4;4;4 | 1;2;3;3 | 1;2;2;3 | 2;3;2;2 | 4 | 4 | 2.25 | 2 | 2.25 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to the Weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. This paper is well-organized and easy to read. \n2. The proposed method presents a reasonable approach to improve the reasoning performance of LLMs by combining weak and strong LLMs. \n3. The approach is practical and has the potential for broad application.\n4. The experimental results reveal that the proposed method significantly enhances performance on various reasoning tasks compared to both the weak and strong LLMs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a collaborative framework that integrates a specialized weak model with a general strong model to enhance the reasoning performance of LLMs. In this framework, the weak model generates detailed initial drafts and background information tailored to specific domains, while the strong model refines and enhances these drafts utilizing its advanced reasoning capabilities. A feedback loop is implemented to fine-tune the weak model based on the preferences of the strong model, fostering an adaptive and synergistic relationship. Experimental results indicate that the proposed method outperforms both the basic weak and strong LLMs."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The technical innovations introduced in this paper appear to be somewhat limited, as the concept of leveraging both weak and strong LLMs has been extensively explored in prior research, including works such as “Your Weak LLM is Secretly a Strong Teacher for Alignment” and “Synthesizing Text-to-SQL Data from Weak and Strong LLMs.”\n2. A more comprehensive evaluation would enhance the study by comparing the proposed method against a more comprehensive array of advanced baseline models. Currently, the comparisons are limited to several basic baselines. Incorporating more sophisticated weak-strong collaboration methods and state-of-the-art techniques would provide stronger validation of the proposed method's effectiveness.\n3. To demonstrate the versatility of the proposed method, it would be advantageous to conduct experiments using different open-source LLMs of varying sizes."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Why does the main experiment use the strong model GPT-3.5-Turbo for the ethical dataset, instead of maintaining consistency with other domains by using GPT-4?\n2. Why was the learning rate set to 1.41e-5? Intuitively, this seems like an uncommon number, was it determined by searching different learning rates?\n3. Typo: There is inconsistent formatting of the name 'Llama-3' throughout the paper. For example, it is written as \"LLama-3-8B\" in Table 1, \"LLaMA3-8B\" on line 481, and \"Llama3-8B\" on line 381.\n4. In the main experiment, were the results for Llama-3-8B obtained using a few-shot setting? The IfQA paper used two evaluation methods: a supervised setting and a few-shot setting. If the few-shot setting was not used, intuitively, the output form of the model might not be controllable. Similarly, when using Llama-3-70B and Llama-2-70B as strong models for evaluation, were few-shot settings adopted?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "The research topic regarding the collaborative interaction between a specialized weak model and a general strong model is very important"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposed a weak-strong collaboration mode, in which a weak model fine-tuned on domain-specific datasets first generates drafts, while a strong model refines them. By utilizing feedback from the strong model to perform preference optimization, the performance of the weak model is further improved."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Lack of novelty: The concept of weak-strong collaboration explored in the paper, essentially using feedback to correct large language models, is not a novel idea and has already been extensively researched [1]. The two collaboration strategies: standard refinement bears strong resemblances to prior works [2], and preference enhancement that leverages DPO for inconsistency alignment is also not new. It’s just old wine in a new bottle, wrapping up a story of the interaction between a specialized weak model and a general strong model.\n2. The datasets used in the experiments lack representativeness: (1) Domain selection: In addition to the three domains selected, more typical mathematical reasoning datasets should be included, such as GSM8k and MATH, which have been widely used in previous model collaboration work [3][4]. (2) Dataset selection: For the medical domain, the choice of MedMCQA, which is limited to a multiple-choice format, is too narrow. There should be more focus on broader and more practical long-form QA datasets like K-QA [5].\n3. Lack of baselines for model collaboration/ensemble: The main experiment mainly compares the proposed collaboration approach with only weak or strong model strategies, omitting critical baseline comparisons, such as self-refine [6], and other ensemble strategies such as multi-model debate [7], self-consistency.\n4. Some specific experimental settings were not clearly stated, for example, the retrieval knowledge base used by FLARE in three selected domains was not mentioned\n5. The Preference Enhancement Interaction lacks generalizability, as the acquisition of preference pairs is specific to a strong model. This specificity might limit the effectiveness and generalization when collaborating with different strong models.\n6. Questioning the experimental results: The results presented in Table 1 raise concerns about the necessity of weak-strong collaboration. 
In the Counterfactual and Medicine domains, weak models without SFT are much stronger than strong models, e.g., Llama-3-8b (68.57) vs. GPT-3.5-turbo (22.62). Similarly, in the Ethics domain, the performances were comparable. If weak models can perform on par with or better than strong models, is the use of weak-strong collaboration justified? Does the motivation for using a stronger model to assist weaker ones still stand?\n7. Concerns about the high costs for strong models compared to minor performance improvements in weak models: The proposed collaborative approach, compared to merely using a weak model for SFT, only brought minor improvements (shown in Table 1). However, this process requires the strong model to refine and evaluate the output of the weak model, which brings significant API costs.\n8. Lack of in-depth analysis of the improvements brought by the cooperation strategy, for example, the paper does not specify in which aspects the strong model has improved the weak model, nor does it detail the types and percentages of errors detected in the weak model by the strong model. Furthermore, the frequency with which the weak model adopts feedback from the strong model is not discussed. More comprehensive case studies are needed to understand these dynamics fully, rather than merely providing a superficial overview.\n\n[1] Automatically Correcting Large Language Models: Surveying the Landscape of Diverse Automated Correction Strategies. Pan et al. TACL 2024\n\n[2] Small Models are Valuable Plug-ins for Large Language Models. Xu et al. ACL 2024 Findings\n\n[3] Learning to Decode Collaboratively with Multiple Language Models. Shen et al. ACL 2024\n\n[4] Ensemble learning for heterogeneous large language models with deep parallel collaboration. Huang et al. NeurIPS 2024\n\n[5] K-QA: A Real-World Medical Q&A Benchmark. Manes et al. BioNLP 2024\n\n[6] Self-Refine: Iterative Refinement with Self-Feedback. Madaan et al. 
NeurIPS 2023\n\n[7] Improving Factuality and Reasoning in Language Models through Multiagent Debate. Du et al. arXiv 2023"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "a) Do the authors have a vision on how the proposed CoWEST is different from the LLM cascade methods such as CAT[1]?\n\nb) How the sampling will impact on the performance?\n\nc) How's the evaluator's quality? Have the author consider using logits or a trainable method (e.g., a MLP) to serve as the evaluator? Since self-critique sometimes may results LLM is always more confident with the content generated by itself, while logits or trainable methods can be more fair.\n\nd) In 127-129, using \\citep()\n\ne) In line 207, \"referred to as\"? there are more, please check the writing for readibility.\n\n**Reference**\n\n[1] Cascade-Aware Training of Language Models."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "a) The proposed CoWEST shows remarkable improvements over SOTA methods such as RAG-based methods.\n\nb) The interaction design between the weak LLM and the strong LLM is interesting compared to existing methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work focuses on the challenge that current large language models (LLMs) often struggle with specific domains or downstream tasks. To tackle this, we propose a collaborative framework, CoWEST, which integrates a weak LLM with a strong LLM. In CoWEST, the weak LLM is first fine-tuned for a specific domain or task, and then the strong LLM’s general capabilities are leveraged to enhance the fine-tuned weak LLM’s output. Additionally, a preference tuning paradigm is used to evaluate the collaborative output against that of independent models. Extensive experiments demonstrate the effectiveness of the proposed CoWEST framework."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "a) The sampling method for preference tuning is not clear, lack the the sampling statistics (e.g., sample distribution, average sample size etc.).\n\nb) The evaluator is like a self-critique and more evaluator quality details such as score criteria, comparisons with human evaluation etc. should be included.\n\nc) Minor writing issues."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. The LLM abbreviation of L121 is repeatedly defined.\n2. The reference form of L127-L128.\n3. What if the weak model and the strong model are the same?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Considering the challenges of real-world scenarios, the issues that this paper focuses on are necessary. Strong-weak model collaboration is one of the promising directions.\n2. \"Using weak models for domain adaptation and then strong models for reasoning\" can be seen as a RAG method in which the weak model after domain adaptation generates evidence context, and then the strong model uses the evidence in this domain for reasoning. This may actually increase the amount of information for strong model reasoning."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose a paradigm of weak-strong model cooperation, in which the large model with stronger reasoning ability is responsible for reasoning with the background knowledge and drafts generated by the small model. Furthermore, the authors propose to fine-tune the weak model to adapt it to the preferences of the strong model to achieve so-called adaptive cooperation. The proposed method achieves the improvement in the F1 performance in three datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The authors explore the framework of weak-strong model cooperation, but I think it still needs to be better explained, that is, how the proposed feedback loop and interaction strategy go beyond the static cooperation method. I think the claims of L111-L115 are a bit far-fetched (considering that the weak model still reasons first during reasoning, and then the strong model uses the output of the weak model for reasoning). In addition, the writing needs to be improved, there are many small errors, and some claims are confusing to readers.\n2. The paper focuses on improvements such as performance scores (F1), but lacks qualitative analysis of how the models collaborate in real-world scenarios. In fact, I am still confused about the example in Figure 6, how to show the role of the strong model? There is also limited information about how the feedback loop between weak and strong models affects the interpretability or usability of the output in complex reasoning tasks, but it is one of the important contributions emphasized by the authors. I suggest that the authors add some qualitative examples that can show how collaboration improves responses (in terms of factual accuracy, reasoning chain, or coherence).\n3. The paper acknowledges the computational cost of fine-tuning large models, but the authors do not provide much insight into the scalability of COWEST when it is extended to larger weak models or more complex tasks, such as multi-hop questions that exploit the strong reasoning capabilities of large models. In addition, the resource impact of the feedback loop (e.g., computational overhead) is not discussed in depth, where the two inferences in the Inference stage increase the computational cost.\n4. The authors should conduct comparative experiments on transferring domain knowledge to strong models in the case of longer contexts."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose a synergistic collaboration framework where a smaller, specialized model and a larger, general-purpose model work together, using preference finetuning to enhance problem-solving in specialized tasks."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024synergistic,\ntitle={Synergistic Weak-Strong Collaboration by Aligning Preferences},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3iJ7eSj2rE},\nnote={under review}\n}"
},
"abstract": {
"value": "Current Large Language Models (LLMs) demonstrate exceptional general reasoning and problem-solving abilities but often struggle with specialized tasks or domains requiring proprietary information due to their generalized training and size constraints. Fine-tuning large models for every specific domain is impractical because of inaccessibility to black-box model parameters and high computational costs. We explore a solution to this challenge: can a collaborative framework between a specialized weak model and a general strong model effectively extend LLMs' capabilities to niche but critical tasks? We propose a dynamic interaction where the weak model, tailored to specific domains, generates detailed initial drafts and background information, while the strong model refines and enhances these drafts using its advanced reasoning skills. To optimize this collaboration, we introduce a feedback loop by fine-tuning the weak model based on the strong model's preferences, fostering an adaptive and synergistic relationship. We validate our framework through experiments on three datasets. We find that the collaboration significantly outperforms each model alone by leveraging complementary strengths. Moreover, fine-tuning the weak model with strong model's preference further enhances overall performance.\nOur collaborative approach achieves an average F1 score improvement of 3.24\\% over the weak model alone and 12.17\\% over the strong model alone across all benchmarks."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Weak-Strong Model Collaboration",
"Preferences Tuning",
"Large Language Model"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/84abf7a1736735061ba70240d2871355b6ca1421.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Synergistic Weak-Strong Collaboration by Aligning Preferences"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3j72egd8q1 | Custom Gradient Estimators are Straight-Through Estimators in Disguise | main | Active | quantization;deep learning;optimization | optimization | 5;5;5;6 | 3;4;3;3 | 3;2;2;3 | 2;2;3;3 | 3;2;2;3 | 5.25 | 3.25 | 2.5 | 2.5 | 2.5 | -0.333333 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Could you provide unadjusted(A) in Table 4?\n2. Besides average error, could you draw a histogram which can better validate the claims?\n3. How do you empirically decide when the weight difference is going to affect the conclusion? E.g., Adam shows larger weight difference, while ImageNet shows larger weight difference. It does not seem clear to me they all supports the claim that STE works the same as other gradient estimators on various settings."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The claim is strong that other gradient estimators works similar as STE in QAT.\n2. Experiments show that the weight difference is small to support the claim."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors theoretically analyze the weight difference in QAT when trained with different gradient estimators. Under certain conditions, the authors show that the weight difference is small which means that there's no need to try another gradient estimator other than STE. Empirical results show that the weight difference is small when adopting the proposed weight initialization."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The mirror room story does not appear closely connected to the theoretical analysis.\n2. Assumption 5.1.1 violates Figure 1 where the gradient could be zero.\n3. From Table 4, Adam leads to larger weight difference.\n4. For more complicated task like ImageNet, the weight difference is much larger than MNIST."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. In Eq. (5), why the $E^{(t)}$ is defined differently for SGD and Adam?\n\n2. Line 200, \"Most multi-bit gradient estimators proposed in the literature are cyclical.\" Can you specify exactly which estimators are cyclical?\n\n3. Typos in Eq. (9)-(12), $f_{STE}^{(t)}$ should be $\\nabla f_{STE}^{(t)}$.\n\n4. In assumption 5.1.1, should $| \\hat{Q}^\\prime (w) |$ be $\\hat{Q}^\\prime (w)$ without the absolute value?\n\n5. Does Table 4 suggest that more than 95% quantized weights from $\\hat{Q}$-net and STE-net are *identical*? This finding seems counterintuitive."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper is overall well presented.\n2. The concept of mirror effect is interesting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies behavior of gradient estimators, including straight through\nestimator (STE), for weight quantization. It is shown that a large class of\nweight gradient estimators is approximately equivalent to the STE during training using SGD and Adam."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "A primary concern is that the key claims and several major concepts lack mathematical rigor. Additionally, the main theoretical results provided are too limited to substantiate the claims:\n\n1. Contribution 1 states that '... all nonzero weight gradient estimators lead to approximately equivalent weight movement for non-adaptive learning rate optimizers ...'. However, the term 'approximately equivalent weight movement' lacks a precise mathematical definition. It would be helpful to formalize this concept, perhaps by specifying the conditions under which these movements are considered 'approximately equivalent.' \n\n2. According to Section 6.2, 'approximately equivalent weight movement' appears to refer to high 'Quantized Weight Agreement' or a small 'Normalized Weight Alignment Error ($\\bar{E}$)'. Again, these metrics require explicit mathematical expressions for each. Additionally, this interpretation is not fully supported by the main theoretical results (e.g., Theorem 5.1), which only derive the increment in weight alignment error between two consecutive iterations, rather than a direct measure of agreement or alignment over the entire optimization trajectory. \n\n3. For the error bounds in Eq. (6) and (7), there is insufficient justification for why the gradient error terms should be small, nor any clear indication of how small these terms are. It is insufficient to merely claim that a term is 'small' and then disregard it. These errors could accumulate significantly over iterations, potentially undermining the main conclusions.\n\n4. The use of mathematical notation is poor, which possibly lead to incorrect derivations. For example:\n - The Euclidean norm should be denoted by $\\|| \\cdot \\||$ rather than $\\| \\cdot \\|$ as in Eq. (5)-(12) and other instances.\n - It should be explicitly stated that $Q$ and $M$ are applied *element-wise* to the weight vector. 
Additionally, it would be preferable to use bold letters to represent vectors and to distinguish them from scalars.\n - In Eq. (10), you have three vectors, $\\nabla f_{Q}^{(t)}$, $\\hat{Q}^\\prime$, $M^\\prime$, how are they multiplied together? The manner in which they are multiplied together is unclear. Furthermore, the residual term in Eq. (10) should not be a scalar $O(\\eta^2)$, but rather a vector. I believe that the second-order error term also depends on the *model size*, i.e., the dimension of $w$.\n\n5. The experiments is only conducted for one instance of $\\hat{Q}$ (HTGE Pei et al.), which is insufficient. Additionally, the expression of $\\hat{Q}$ from HTGE is missing.\n\nMinor comments:\n\n1. A key reference on the theoretical analysis of STE is missing, specifically: *Yin et al., Understanding Straight-Through Estimator in Training Activation Quantized Neural Networks, ICLR2019.*"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weaknesses"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The theoretical insights are interesting, unexpected and (to my knowledge) novel. They offer better understanding and insight into how gradient estimators work, which appeals to me.\n- The paper is generally well written and easy to read. I appreciate how the authors lead their result with an intuitive explanation and illustrative graphic. This makes the following theory much easier to intuit.\n- The experiments shown in the paper provide good evidence for the theoretical results."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a theoretical analysis of gradient estimators used in quantization-aware training. The authors show that in the case of quantized weights (but full precision activations) many extant gradient estimators for the quantization operation are approximately equivalent, if the learning rate and weight initialization are adjusted, and the learning rate is small. They then verify empirically their theoretical results, on image classification benchmarks, demonstrating that models trained with different gradient estimators indeed show high weight agreement and similar accuracies."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Major\n\n1. The claims relating to practical impact feel overstated (\"practitioners can now confidently choose the STE\"). The problem setting that the authors explore (full precision activations, quantized weights, uniform fixed point quantization, small learning rate) is rather specific, and practitioners may be interested in quantized activations, or low-precision floating point or larger learning rates etc. I would prefer if the authors tempered their claims.\n2. The experiments, although they demonstrate the theory well, are limited. They do not show finetuning from a pretrained full precision checkpoint, as is common for QAT in practice (I would expect this setting to match well with the theory since 1. QAT finetuning is done with a lower learning rate typically and 2. the gradient norm is likely to be low after initializing from a pretrained model). They do not show results other than 2-bit weight quantization even though they say results are similar. They do not show the practical limits of their theory, e.g. how weight alignment degrades/evolves over training or how much the learning rate needs to be increased for the error terms to start having a large impact.\n\nMinor\n1. Presentation could be improved in a number of ways. \n 1. Use of booktabs for tables. Place all table captions above the tables.\n 1. Tables are hard to parse when skimming -- would benefit from more descriptive captions/grouping table 3 with 4\n 2. All quotation marks are incorrectly rendered by LaTeX.\n 2. Fig. 3 would look better with the bins.\n2. The choice of training recipes are not explained -- it is unclear why the first 10 epochs are done using the same gradient estimator. Is it because the gradient norm is too high at the start of training resulting in the weights quickly diverging?\n3. I think the point made in line 305 should be made more prominent. 
I think it is quite important that the reader is made aware that the gradient error is small/zero since Q-net and STE net will quantize to similar weights."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Lines 400-404: Why was the initial 10% of training for the ImageNet-ResNet setup kept identical? What would the results be without this measure?\n\nLines 527-528: Regarding the statement that high learning rates might be the reason the equivalence is not observed in other studies, the authors write, \"we expect that this counter-argument will not stand the test of time, since by our main results, the higher learning rate masks the fact that models with novel $\\hat{Q}$ and the STE are still approximating the same process.\" Could you elaborate on what you mean by \"will not stand the test of time\" given that equations (6) and (7) indicate learning-rate-squared errors? It seems that higher learning rates would increase these errors. Why should these differences not enable advantages for novel $\\hat{Q}$?\n\n\nLines 530-539: Can the authors comment on the potential implications of their results for the studies mentioned in the last paragraph, which propose additional innovations alongside novel gradient estimators?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Equations (6) and (7) provide a rigorous theoretical contribution regarding the small difference in weight movement for different gradient estimators.\n\nThe presentation and writing style is very clear with helpful intuition such as the analogy of the \"funhouse mirror\". Additionally, the learning rate tweak in the experiment provides a practical comparison for the magnitude of the differences."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors examine how different gradient estimators affect quantization-aware training. They theoretically demonstrate that, in the limit of small learning rates and with minor adjustments to the initialization and the learning rate magnitude, most gradient estimators yield equivalent weight movements. Consequently, they suggest that the Straight-Through Estimator, treating the gradient as if no quantization occurred in the backward pass, performs as well as any other more sophisticated alternative. Their theoretical claims are complemented by an empirical investigation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The authors themselves acknowledge in Section 8 that many publications introduce more than just a new gradient estimator, raising questions about the broader practical impact of their findings. Novel gradient estimators are often proposed with other techniques to enable any benefits. As a result, the findings may have limited applicability beyond specific configurations.\n\nIt is somewhat difficult to draw clear conclusions about practical applications from the experiments. While the learning rate tweak provides a useful comparison, the differences in Tables 4 and 5 are challenging to interpret, particularly without comparisons across different initializations. A convincing experiment could be, to find a custom gradient estimator in the literature, which has been shown to improve validation accuracy over STE, replicate the results and then demonstrate that both perform equally with proper adjustments. The authors do not provide such a comparison, raising questions about whether the custom estimator was applied correctly in their experiments or if its potential advantages were overlooked.\n\n\nMinor comments:\n\nline 245: Typo: $Q(w_{\\hat{Q}}^{(t)})=Q(w_{\\hat{Q}}^{(t)})$\n\nlines 329-340: Equations (8) to (12): \"$\\nabla$\" missing before $f^{(t)}_{STE}$\n\nlines 502-504: Missing figure number for Figure 3"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We prove and empirically demonstrate that custom gradient estimators are equivalent to the straight-through estimator for quantized neural network optimization."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024custom,\ntitle={Custom Gradient Estimators are Straight-Through Estimators in Disguise},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3j72egd8q1},\nnote={under review}\n}"
},
"abstract": {
"value": "Quantization-aware training comes with a fundamental challenge: the derivative of quantization functions such as rounding are zero almost everywhere and nonexistent elsewhere. Various differentiable approximations of quantization functions have been proposed to address this issue. In this paper, we prove that a large class of weight gradient estimators is approximately equivalent with the straight through estimator (STE). Specifically, after swapping in the STE and adjusting both the weight initialization and the learning rate in SGD, the model will train in almost exactly the same way as it did with the original gradient estimator. Moreover, we show that for adaptive learning rate algorithms like Adam, the same result can be seen without any modifications to the weight initialization and learning rate. These results reduce the burden of hyperparameter tuning for practitioners of QAT, as they can now confidently choose the STE for gradient estimation and ignore more complex gradient estimators. We experimentally show that these results hold for both a small convolutional model trained on the MNIST dataset and for a ResNet50 model trained on ImageNet."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"quantization",
"deep learning",
"optimization"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/c745d2831deb070851b86757bc9435cdd4da1dfd.pdf"
},
"presentation": null,
"primary_area": {
"value": "optimization"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Custom Gradient Estimators are Straight-Through Estimators in Disguise"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3jRzJVf3OQ | Quantum entanglement for attention models | main | Active | Attention models;Quantum entanglement;Transformers | unsupervised, self-supervised, semi-supervised, and supervised representation learning | 3;3;6;6 | 4;5;4;3 | 2;3;2;3 | 2;2;2;3 | 2;2;2;3 | 4.5 | 4 | 2.5 | 2.25 | 2.25 | -0.707107 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "The model (entanglement entropy) is giving 100% test accuracy on the MC dataset in Table 1. Is there any explanation for this?\nIn table 1, in the QSANN model, when only CLS token was used the test accuracy dropped from 100 to 56%. What could have been the possible reason?\nWhy was the comparison only with QSANN model?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Novelty: The paper proposes entanglement based attention and novel methodology for computing attention. Three measures of entanglement(Entanglement entropy, SWAP test and concurrence) are used for measuring the entanglement between the queries and the value vectors. The proposed method is evaluated in both classical and quantum datasets. The proposed model was compared with scaled-dot-product attention and another quantum attention method QSANN model for various vision and NLP datasets. \nSoundness: The paper is well written. The paper claimed that the quantum entanglement based attention is having a better generalization gap across all datasets. The experimental results of the paper supports this claim. Experiments were conducted extensively on various NLP and vision datasets with clear figures and tables. \nSignificance: With limited number of works done on quantum computing w.r.t Transformers, this work has relevance and future applications\nRelation to prior works: The previous related works are discussed comprehensively in this paper. \nReproducibility: The authors have provided the source code of the experiments"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper incorporates quantum entanglement into the attention mechanism of a Transformer encoder by using a measure of entanglement to compute the attention matrix."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Clarity: In section 4.3, the authors have defined various methods of entanglement. But how these methods are applied in regard to self attention is not clear. The authors have stated the methods used for computing key, query quantum states and attentions, but exactly how it is done on mathematical terms is not defined. A mathematical expression of the measures of entanglement for computing the attention, would have made it clearer. In Figure 5, there is no explanation of how the parameterized quantum circuit is generated.\nThe transformer model consists of only two sequential attention layers. Eventhough the performance of the model with varying data sizes were studied, the performance of the model with varying model sizes have not been studied. Is the model underperforming on larger datasets because of smaller model size?\nA qualitative analysis of the behavior of attention maps and the interactions between various positions, if included, would have given a better understanding of the model."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See weakness."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper is easy to follow.\n- Introducing quantum in classical computers is meaningful and interesting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates the potential of quantum entanglement for attention in Transformers. Authors use the entanglement of quantum states as a co-relation criterion for the Attention layer in the Transformer. The method is evaluated on both language tasks, vision tasks, and quantum datasets. Experiments show the potential of quantum entanglement attention to have better generalization ability."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Dataset used for experiments is too small. The transformer is a large-scale model that requests large-scale data to learn meaningful features. Besides, quantum entanglement attention shows its outperformance from Figure 2 when the size of the dataset is less than 1000, which is not practical.\n- The paper claims that quantum entanglement attention has better generalization ability, which is the difference between train and test accuracy. However, as stated above, the transformer is a large-scale model that requests large-scale data, which means the transformer would easily overfit in small datasets. This has resulted in poor accuracy of transformers on small datasets. For example, in CV tasks, transformers generally require 200-300 epochs on ImageNet to match the accuracy of CNN (which also requires the corresponding number of epochs), while in CIFAR datasets, transformers require 7200 epochs to match the accuracy of CNN, which only requires 200 epochs.\n- It's vital to visualize or analyze the attention of quantum and classical. If the elements of quantum attention matrix is all same, the transformer treats all token equally, which means the transformer model is about to degenerate into an MLP which could generalize better than transformer when dataset is small. It's hard to conclude that the benefit is coming from quantum entanglement operation.\n\n\n- The details of the transformer model should have a description. What's the dimension of the transformer? How many blocks do you stack? \n- The details of datasets for experiments should have a more clear description, e.g., MC and RP datasets.\n- The details of training should have a description. How many epochs? How do you train the transformer?\n- The illustrations in the paper should be improved. The current illustration is confusing and the content is not clear, especially Fig 1."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Given that current quantum hardware is noise-prone, how do the authors envision the entanglement-based attention mechanism performing in noisy conditions? Are there plans to test this model in simulated noisy environments or on NISQ devices to verify its stability?\n\n2. Although this paper proposes using entanglement entropy for a quantum implementation of the attention mechanism, it lacks an in-depth analysis of the theoretical foundation and effectiveness of this approach. It is recommended that the authors enhance the theoretical exploration of the role of quantum entanglement in the attention mechanism, especially by explaining from a quantum information perspective why it performs exceptionally well on certain tasks. Additionally, a discussion on the theoretical basis and advantages of Hilbert space (related to the Quantum Feature Map, QFM) and the parameter efficiency of quantum models compared to classical models would be beneficial.\n\n3. Can the authors include a table comparing the parameters and architectures of the quantum and classical models to clarify any computational trade-offs? This would help readers understand the efficiency and scalability implications of the proposed approach.\n\n4. Have the authors considered evaluating the entanglement-based attention mechanism against other classical models, such as MLPs, to provide a broader baseline comparison? This could clarify whether the quantum approach offers unique benefits over simpler classical architectures.\n\n5. Several relevant works on quantum self-attention and quantum Transformer models are missing from the current paper. Could the authors consider adding the following references to provide additional context and background on prior work in this area?\n\nShi, Shangshang, et al. \"A natural NISQ model of quantum self-attention mechanism.\" *arXiv preprint arXiv:2305.15680* (2023).\nShi, Jinjing, et al. 
\"QSAN: A near-term achievable quantum self-attention network.\" *arXiv preprint arXiv:2207.07563* (2022).\nDi Sipio, Riccardo, et al. \"The dawn of quantum natural language processing.\" *ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*. IEEE, 2022."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "This paper presents an innovative approach that integrates quantum entanglement into the Transformer’s attention mechanism, using entanglement entropy to calculate attention coefficients. The method demonstrates improved generalization and reduced overfitting on small classical and quantum-generated datasets, providing a robust evaluation against classical and other quantum attention models. This work contributes to quantum-classical hybrid models, showing potential in data-limited applications and opening avenues for further exploration in quantum-enhanced machine learning."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces an approach that integrates quantum entanglement into the attention mechanism of Transformer models, proposing an entanglement-based attention layer. By encoding query and key vectors as quantum states, entangling them through a parameterized quantum circuit (PQC), and using entanglement entropy to calculate attention coefficients, the method aims to enhance Transformer performance on specific tasks. Experimental results demonstrate that this quantum-based attention layer outperforms classical attention on smaller classical datasets and quantum-generated datasets, showing a superior generalization gap and reduced tendency to overfit. The work provides valuable insights into leveraging quantum properties within classical machine learning frameworks, especially for data-limited applications, and contributes to the emerging field of quantum-inspired hybrid models. This research lays the groundwork for further exploration of quantum resources as subroutines in machine learning models, particularly Transformers, offering new possibilities for performance improvements in specialized scenarios."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper does not address the impact of noise on the proposed quantum model, which is crucial given the current limitations of noisy intermediate-scale quantum (NISQ) hardware. Quantum systems are inherently sensitive to noise, and without examining how noise affects the model’s performance, it is unclear whether the proposed entanglement-based attention mechanism can be effectively implemented on real hardware. To improve the practical relevance, I recommend adding noise simulations or discussing how hardware noise might affect entanglement performance in attention mechanisms, which would make the work more applicable to real-world quantum devices.\n\n2. Although the authors introduce entanglement entropy for the attention mechanism, the paper lacks a rigorous theoretical foundation to explain why entanglement specifically improves generalization in small-data scenarios. There is little discussion on the advantages of Hilbert space representations (related to quantum feature mapping, QFM) or why quantum entanglement should provide performance benefits over classical models, especially from a quantum information perspective. I recommend that future work include a deeper theoretical exploration of the role of quantum entanglement in attention mechanisms. This could involve discussing Hilbert space properties, parameter efficiency, and the specific benefits of quantum versus classical models, to clarify the approach’s underlying strengths and limitations.\n\n3. The paper does not provide a detailed comparison of parameters between the quantum and classical models, which could help clarify the computational trade-offs of the proposed approach. Including a summary table of model configurations and hyperparameters would enhance transparency, allowing readers to better understand the computational costs associated with each method.\n\n4. The paper only compares its entanglement-based attention mechanism with a simplified Transformer model. 
It would be helpful to compare against other classical models, such as MLPs, to demonstrate the quantum model’s relative performance more comprehensively.\n\n5. The paper does not reference several recent works that are highly relevant to quantum self-attention and Transformer models. Key papers, such as Shi et al. (2023), Shi et al. (2022), and Di Sipio et al. (2022), explore similar mechanisms and should be cited for completeness. These references would provide additional context and underscore where this work contributes new insights to the existing literature."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "A few concerning points are listed as follows, and I hope the authors could clarify these before I change my mind in this paper's decision.\n\n1. How about the scalability/complexity of this model, or, what is the scaling with respect to the vector size?\n\n2. Can it show more concrete relations between quantum entanglement and enhancement, to evaluate whether stronger entanglement leads to stronger model performance?\n\n3. If the quantum circuit size becomes larger, will the quantum model keep its advantage on classical datasets as in Figure 2?\n\n4. There have been existing papers differently adapting attention mechanisms, such as Cherrat and Kerenidis (Quantum 2024), Ren-Xin Zhao (IEEE Trans. Pattern Anal. Mach. Intell. 2024), and Khatri (arXiv:2406.04305). What is your advantage or novelty compared to their works?\n\n5. Could you please provide a resource analysis including time complexity, qubit number, or number of measurements...\n\n6. Please clarify several concepts, including \"learning rate scheduler\", \"data reuploading layers\" In Appendix A. \n\n7. What is the number of trainable parameters in this model?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The attention mechanism is a cornerstone of modern machine learning, and the potential enhancements offered by quantum computing are compelling.\n2. Exploring the synergy between quantum computing capabilities and entanglement is valuable, and this paper provides promising numerical evidence.\n3. The quantum circuits are relatively simple and could likely be implemented in near-term quantum computers."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents an entanglement-based attention layer integrated into a classical Transformer, where the traditional dot product operation between query and key vector pairs is replaced by a quantum feature map circuit and an entanglement measurement. Leveraging quantum circuits introduces quantum entanglement into the attention mechanism. Numerical experiments indicate that entanglement entropy outperforms other entanglement metrics, and the entanglement-based layer demonstrates advantages over its classical counterpart in classification tasks within vision and NLP domains. For both quantum-generated and classical datasets, the model shows improvements in classification accuracy and a reduced generalization gap."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Circuit size: The model proposed in this paper involves a simulated quantum circuit with only 6 qubits; meanwhile, the quantum circuit in Figure 5 is too simple, introducing only local entanglement. Experiments with more qubits (such as 10~20) could significantly improve the soundness of the paper.\n\n2. Motivations: The introduction talks a lot about well-known concepts, but the motivations or insights to replace the dot product with entanglement in the attention mechanism are not sufficiently discussed.\n\n3. Efficiency: My understanding is that the entanglement measurement needs to be performed as many times as the number of attention coefficient matrix elements, which is quite inefficient. The algorithmic/time complexity should be explicitly discussed in this paper.\n\n4. Missing details: There is no specific explanation for the query state and key state; are they row or column vectors of Q and K? Detailed circuit implementations should be provided. Also, the complexities of the different entanglement measurements are not compared in Section 4.3.\n\n5. Concerns about model performance. The numerical results in Figure 2 suggest that the classical model performs better as the sample size increases, potentially diminishing the practical value of this model. I wonder if the small size of the quantum circuit limits performance when using large sample sizes. The training curves are not stable, which could be improved by adjusting hyperparameters. Is there any reason behind the instability of the training curve?\n\n6. Citation mistakes. The introduction refers to 'Systematic benchmarking of existing quantum approaches suggests that entanglement may not play a significant role' but cites no paper. In Section 4.1, QFM methods are reviewed but not proposed by Khan et al. (2024). In Section 4.3, Quantum State Tomography lacks citation, where a typo occurs (FST)."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We study quantum entanglement entropy in attention models."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024quantum,\ntitle={Quantum entanglement for attention models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3jRzJVf3OQ},\nnote={under review}\n}"
},
"abstract": {
"value": "Attention mechanisms in deep learning establish relationships between different positions within a sequence, enabling models like Transformers to generate effective outputs by focusing on relevant input segments and their relations. The performance of Transformers is highly dependent on the chosen attention mechanism, with various approaches balancing trade-offs between computational cost, memory efficiency, and generalization ability based on the task.\n\nQuantum machine learning models possess the potential to outperform their classical counterparts in specialized settings. This makes exploring the benefits of quantum resources within classical machine learning models a promising research direction. The role of entanglement in quantum machine learning, whether in fully quantum or as subroutines in classical-quantum hybrid models, remains poorly understood. In this work, we investigate whether quantum entanglement, when used as a resource, can improve the performance of the attention layer in Transformers.\nWe introduce an entanglement-based attention layer within a classical Transformer architecture and numerically identify scenarios where this hybrid approach proves advantageous. Our experiments on simple standard classification tasks in both vision and NLP domains reveal that the entanglement-based attention layer outperforms classical attention, showing superior generalization on quantum-generated datasets and in settings with limited training data for classical datasets. Additionally, it demonstrates a smaller generalization gap across all tested datasets. Our work contributes towards exploring the power of quantum resources as a subroutine in the classical-quantum hybrid setting to further enhance classical models."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Attention models",
"Quantum entanglement",
"Transformers"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/c06efeae808fb8191f327d232a0b3b4a195d9275.pdf"
},
"presentation": null,
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Quantum entanglement for attention models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3jvgm61l9S | MathScape: Evaluating MLLMs in Multi-modal Math Scenarios through a Hierarchical Benchmark | main | Active | Multimodal Large Language Models;Math Ability;Benchmark | datasets and benchmarks | 3;3;5;6 | 4;4;4;4 | 2;2;3;3 | 2;2;2;2 | 2;1;3;3 | 4.25 | 4 | 2.5 | 2 | 2.25 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Please see above."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. MathScape contains 1325 high-quality, human-collected, real multimodal mathematical problems.\n2. The authors conduct an analysis of the relationship between answer length and performance, which is interesting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposed a new multimodal mathematical evaluation benchmark called MathScape, which consists of 1325 problems. MathScape combines both figures and mathematical text descriptions into images, which presents a challenge to multimodal large language models. This paper also introduced a two-stage evaluation method to evaluate long responses to math questions. They tested several MLLMs in different data-splitting methods to show results from different perspectives."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. In the first challenge, the authors said no existing benchmarks have both the mathematical description and figures captured together in a single image. However, in MathVerse, one category of questions does provide descriptions and figures together. For the second challenge, the authors claim existing methods cannot assess long-form responses, but MathVerse proposes a method to assess the correctness of each step of a chain-of-thought response. The authors should conduct a more comprehensive literature review of the multimodal mathematical evaluation domain.\n2. This paper is not well organized or written, which makes it hard to read and understand. For example, Section 3.1 is oversimplified. The authors did not mention where they collected the mathematics questions or what the original format of the question documents is. Besides, it’s not clear what kinds of annotations were done. What is “knowledge-based classification”?\n3. The proposed two-step evaluation method heavily relies on the LLM’s ability to decompose and judge the answer. This may introduce errors into the process. Did the authors examine how accurate LLMs are on each of the evaluation tasks?\n4. For the evaluation part:\n 1. The model “GLM4V” lacks a citation, and to my knowledge it is an open-source model (https://huggingface.co/THUDM/glm-4v-9b). Besides, the open-source models in Line 278 are not cited properly. These formatting errors make the paper hard to read.\n 2. Some reference performances are not provided, e.g., frequent choice, random choice, and human performance.\n 3. DeepSeekV2 is not among the models in the evaluation setup; did you mean DeepSeek-VL?\n 4. The performance on proof questions is higher than on choice and solution questions. This is uncommon, and the reason given by the authors is not convincing. They said, “The structured format and clear information in proof questions make them easier”. However, when testing models on different kinds of questions, the question format is supposed to be similar unless the format (structured or unstructured) is itself the primary research topic.\n 5. The authors provide limited insight into the performance on MathScape. Results such as “the closed-source models are more accurate than open-source ones” reveal little information.\n5. MathScape claims that it is the first to combine both figures and mathematical text descriptions in a single image. What unique challenge does this format of data bring to models? Did the authors dive deep into analyzing the different challenges presented by MathScape and other multimodal mathematical benchmarks?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "* Problem Sources and Copyright: The problems are stated to be collected from school exams and homework, which raises questions about the original sources and copyright status of these data samples.\n* Fair Compensation: The dataset collection process involved human reviewers for quality control, but it is unclear whether these reviewers received fair compensation for their work."
},
"flag_for_ethics_review": {
"value": [
"Yes, Legal compliance (e.g., GDPR, copyright, terms of use)",
"Yes, Other reasons (please specify below)"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* Evaluation Method Validation: How does the proposed two-step evaluation method compare with traditional evaluation methods in terms of reliability and validity?\n* Token Limit Impact: How does the 2048-token generation limit affect the results, especially for verbose models? What percentage of responses are truncated by this limit?\n* JSON format output: Constraining the model to output JSON format is known to decrease the quality of the generated content (e.g., https://arxiv.org/abs/2408.02442v1). Why did the authors choose to stick to this method? What is the impact of such format constraints in the current settings?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* Benchmark Size and Coverage: The dataset covers a wide range of topics and difficulty levels; 1.2k samples allow for a statistically significant assessment of MLLMs in each subject (except for equations).\n* Data Quality Control: Post-photo quality control and classification is a great addition, allowing reviewers to filter unreadable inputs.\n* Evaluation Approach: The two-step evaluation method with sub-task scoring might reduce judgment errors and allows for more fine-grained analysis of the evaluation results."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces MathScape, a new benchmark that evaluates the mathematical capabilities of Multimodal Large Language Models using photo-based math problems. Unlike previous benchmarks, MathScape integrates problem statements and visual elements within a single image using a print-and-photo or display-and-photo approach. The authors collected 1,325 images of school-level mathematical problems in multiple-choice, free-form, and proof formats (38%, 56%, and 5% respectively). They evaluated 11 closed- and open-weight Large Language Models and provided a case study. The results demonstrate that MathScape is challenging even for state-of-the-art models, particularly in the stage of extracting the problem statement from the image input."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* Insufficient Dataset Details: More comprehensive information about the dataset’s creation, sources, human annotators’ education level, and potential biases would strengthen the paper.\n* Limited Language Scope: The focus on Chinese problems limits the applicability of the benchmark to other languages and educational contexts. (Please clearly state the language scope in the abstract and/or in the introduction.)\n* Evaluation Method Reliance on LLMs: Using LLMs for scoring may introduce biases, as these models may share similar limitations with the models being evaluated. The judgment error is not addressed in the paper's results or case study.\n* Lack of Comparative Analysis: Given that all of the problems are available in textual format, the paper would benefit from a correlation analysis between solve rates on the original textual problems and the photo-converted problems."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "More visualization results of the evaluation process can also help to understand the proposed evaluation strategy."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The data collection process is delicate and clearly stated with clear figures.\n\n2. The classification process for math problems is well defined and reasonable.\n\n3. The authors provide a detailed analysis of accuracy and answer length. This provides some insights for future math MLLMs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes MathScape, a new benchmark of multimodal math problems for evaluating the related capabilities of MLLMs. The collected dataset contains images with both math figures and questions. The authors also use a two-step evaluation method that first extracts the answer and then judges its correctness using LLMs. The authors evaluate different MLLMs on this new benchmark with detailed analysis."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The contribution of this paper is overclaimed. To the best of my knowledge, MathVerse contains six versions of each problem, and the 'vision-only' one also contains both the math figure and the question in the image, similar to the contribution of this paper.\n\n2. The two-step evaluation cannot be viewed as an important contribution, since MathVista also uses an LLM (ChatGPT) to extract answers from the free-form responses of models as the first evaluation stage.\n\n3. The evaluation of some math-domain MLLMs is missing on MathScape, for example: G-LLaVA and Math-LLaVA.\n\n4. Human performance is needed on MathScape for better reference."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "no"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Based on Weakness 1, please elaborate on the most significant contribution of this benchmark compared to existing multimodal math reasoning benchmarks. You can ignore the parallel research, but I think the related work is not comprehensive yet.\n\n2. I think some current MLLMs may suffer from different lingual contexts. Therefore, is it possible to expand your work to English problems, or to explore the performance difference between Chinese and English?\n\n3. The evaluation part should include GPT-4o if possible. Besides, it should dive deeper into analyzing bad-case category proportions and provide more bad-case analysis.\n\n4. I wonder whether geometric problems are the hardest type, as they also require a more complex visual perception of specific components such as angles and lines.\n\n5. The performance tables need to include the parameter size for each open-source model. Also, a scaling analysis is needed if possible."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Originality: The paper presents MathScape, an innovative benchmark that combines real-world math problems captured in images with their correct answers, closely mirroring real-world scenarios and providing a more comprehensive assessment of MLLMs.\n\n2. Quality: The benchmark covers a wide range of difficulty levels, question types, and knowledge areas, which is commendable.\n\n3. Clarity: The paper is structured with clear explanations of the benchmark construction process, evaluation approach, and results."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a new benchmark termed MathScape for assessing the capabilities of Multimodal Large Language Models (MLLMs) in solving mathematical problems that involve both visual and textual information. MathScape addresses the gap in existing benchmarks by offering a more realistic testing environment with image-based math problems. The benchmark is designed to evaluate the theoretical understanding and application ability of MLLMs through a categorical hierarchical approach. Finally, the paper reports on a multi-dimensional evaluation of 11 advanced MLLMs, revealing the challenges posed by the benchmark and identifying current limitations of these models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. I think the authors should be aware that, besides the previous works you mentioned, there are many other mathematical reasoning benchmarks this year [1,2,3,4,5], especially with a similar focus on multimodal reasoning. Hence, two of your contributions (New Perspective and New Benchmark) may lack novelty. Besides, New Method (i.e., how you construct and evaluate) is a fair but not strong contribution to the MLLM community.\n\n2. The paper indicates that the dataset primarily consists of Chinese problems. I think this narrows the contribution as well. Besides, educational levels (i.e., primary/middle/high school) differ considerably between China and Western countries, so it would be better if you could address this limitation, such as by including a comparison of educational standards or proposing how the benchmark could be adapted for different educational systems.\n\n3. The analysis is not sufficient for benchmark work. For example, we need to know the proportion of the diverse reasons why the best model provides incorrect answers (e.g., failure to retrieve the visual information; misunderstanding of positioning; etc.) in both the whole dataset and each dimension. Furthermore, more bad cases are needed.\n\n4. The evaluation focuses on a set of state-of-the-art models, but it might be beneficial to include GPT-4o, which has proven effective for complex reasoning. Besides, math-specific MLLMs should be included as well, since you also mentioned them in your related works.\n\nReferences:\n\n[1] We-Math: Does Your Large Multimodal Model Achieve Human-like Mathematical Reasoning?\n\n[2] IsoBench: Benchmarking Multimodal Foundation Models on Isomorphic Representations\n\n[3] CMM-Math: A Chinese Multimodal Math Dataset To Evaluate and Enhance the Mathematics Reasoning of Large Multimodal Models\n\n[4] CMMaTH: A Chinese Multi-modal Math Skill Evaluation Benchmark for Foundation Models\n\n[5] ErrorRadar: Benchmarking Complex Mathematical Reasoning of Multimodal Large Language Models Via Error Detection"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024mathscape,\ntitle={MathScape: Evaluating {MLLM}s in Multi-modal Math Scenarios through a Hierarchical Benchmark},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3jvgm61l9S},\nnote={under review}\n}"
},
"abstract": {
"value": "With the development of Multimodal Large Language Models (MLLMs), the evaluation of multimodal models in the context of mathematical problems has become a valuable research field. Multimodal visual-textual mathematical reasoning serves as a critical indicator for evaluating the comprehension and complex multi-step quantitative reasoning abilities of MLLMs. However, previous multimodal math benchmarks have not sufficiently integrated visual and textual information. To address this gap, we proposed MathScape, a new benchmark that emphasizes the understanding and application of combined visual and textual information. MathScape is designed to evaluate photo-based math problem scenarios, assessing the theoretical understanding and application ability of MLLMs through a categorical hierarchical approach. We conduct a multi-dimensional evaluation on 11 advanced MLLMs, revealing that our benchmark is challenging even for the most sophisticated models. By analyzing the evaluation results, we identify the limitations of MLLMs, offering valuable insights for enhancing model performance."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Multimodal Large Language Models",
"Math Ability",
"Benchmark"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/66a6faa7becf9b5c1af605bc045e8424c2a65215.pdf"
},
"presentation": null,
"primary_area": {
"value": "datasets and benchmarks"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "MathScape: Evaluating MLLMs in Multi-modal Math Scenarios through a Hierarchical Benchmark"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3kADTLbKmm | SparseDM: Toward Sparse Efficient Diffusion Models | main | Active | Diffusion models;sparse pruning;2:4 sparsity | generative models | 3;3;5;5 | 4;3;4;4 | 1;2;2;3 | 1;2;3;2 | 3;3;2;3 | 4 | 3.75 | 2 | 2 | 2.75 | 0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Could the authors provide the parameter counts for each layer of the SD model before and after sparse pruning?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The writing is very clear, and the main idea is highlighted effectively."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a pruning strategy for Diffusion models, using mask pruning to achieve progressive multi-step pruning. Ultimately, it realizes 1:2 pruning according to the Ampere architecture. During training, knowledge distillation is used to transfer knowledge from the full model to the pruned model."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The pruning strategy is based on existing structures, with a relatively simple motivation. There are already other methods that achieve similar results, such as using linear attention or directly training a smaller model with distillation. \n2. Compared to directly using STE-based pruning, it does not further reduce the computational load.\n3. In Section 3.2, \"Transfer learn sparse diffusion models\" strategy is mentioned, but it does not explain the significant differences between this strategy and the progressive sparse training strategy discussed in Section 2.2. If the focus is solely on testing with perturbed datasets, it may not constitute a significant contribution.\n4. A generalized pruning strategy suitable for Transformer networks has not been proposed; simply relying on data perturbations is insufficient to demonstrate applicability to other datasets. Further testing on additional datasets, such as CelebA-HQ, LSUN Church, would be beneficial.\n5. Many of the latest comparative algorithms from 2024 are not mentioned, such as \"Pruning for Robust Concept Erasing in Diffusion Models\" and \"LD-Pruner: Efficient Pruning of Latent Diffusion Models using Task-Agnostic Insights.\"\n6. There is no comparison of the parameter counts for each layer of the SD model before and after sparse pruning. It is recommended to include a chart in the appendix to illustrate this.\n7. While Section 2.3 mentions applying perturbations to the dataset, it does not provide specific details on how the perturbations were implemented.\n8. The experiments only validate the FID score as a single metric; it is advisable to explore additional metrics, such as SSIM."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1.\tThe authors mentioned that “it does not mean that the greater the sparsity, the better the FID”. Please discuss the reason and explain why you chose 2:4 sparsity.\n2.\tPlease discuss why ASP performs so much worse in all experiments.\n3.\tPlease also clarify why your method and STE-based pruning achieve the same MACs.\n4.\tPlease explain why the proposed method in Fig. 3a obtains a lower FID in the first several steps.\n5.\tWhy is the initial FID of the 2:4 sparse model different between Fig. 3b and Fig. 3d?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.\tThis paper is well-written.\n2.\tThe motivation is clear enough.\n3.\tThe organization of this paper is great."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes to improve the efficiency of DMs using sparse matrices targeting GPUs with 2:4 sparse acceleration. The authors improve the STE method and propose to gradually transfer knowledge from dense models to sparse models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tThere is a typo in Eq. 5. Please also check all equations. Moreover, not all symbols have been explained.\n2.\tThe experiments are relatively limited. Specifically, only two models, U-ViT and DDPM, are tested with the proposed pruning, and they were proposed in 2022 and 2020 respectively. More recently proposed models such as DiT should also be included.\n3.\tA discussion of limitations is missing from this paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please address the weakness stated above."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "- The 2:4 sparse model calculation offers practical value for practitioners using NVIDIA Ampere architecture GPUs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work aims to reduce the computation of Diffusion Models during inference. The authors suggest a method of straight-through estimation, which applies sparse masks to layers of a pretrained diffusion model and then employs transfer learning for training. Then, they use the same sparse mask during inference to improve compute efficiency."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- While it may have some practical value for practitioners using the NVIDIA Ampere architecture, the same technique may not benefit other practitioners or general researchers without access to Ampere-architecture GPUs.\n\n- Besides, the straightforward idea of using masked training is neither interesting nor technically new.\n\n- More disappointingly, the speedup from this customized training for a particular architecture is only about 1.2x. Studies on reducing time steps for diffusion inference or on diffusion quantization/pruning methods may be more effective at achieving the same purpose."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "* In Table 3, some variants (e.g., patch size = 2 and mlp_ratio = 2) are slower than the dense model; why do you think this is?\n* I think it would strengthen the effectiveness of SparseDM if the authors showed that it can also be applied to models like Stable Diffusion."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* The paper introduces a simple fine-tuning method that converts existing diffusion models into sparse models, enabling them to be used in scenarios with limited computing power, such as on mobile devices.\n\n* The observations about fixed sparse training are interesting.\n\n* Experiments on various generation scenarios verify the effectiveness of SparseDM compared to baselines."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces SparseDM, which converts existing diffusion models into sparse models that fit the 2:4 sparse operator on the GPU. Specifically, the authors propose a Straight-Through Estimator (STE)-based fine-tuning framework that learns sparse masks. These sparse masks accelerate GPU inference by up to 1.2x. Comprehensive experiments validate the effectiveness of the proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**Weakness 1: More clarifications on Section 2.3.**\n\nIn Section 2.3, the authors claim that diffusion models only consider the distribution shift of the noisy data while sparse pruning methods only consider the model's weight change. Then, referring to RFR, the authors convert the model's weight changes resulting from sparse pruning methods into data changes for the diffusion model's training process. However, typical diffusion models have indicators for perturbed data (such as the noise schedule and timestep embedding), and it is unclear how these relate to perturbations caused by sparse training.\n\n**Weakness 2: Lack of analysis of fixed sparse training**\n\nI am not sure why fixed sparse training would be more effective than traditional progressive sparse training. Based on the experimental results, it seems that fixed sparsity applies a consistent distribution shift across all noise levels in diffusion training, whereas progressive sparse training gradually shifts the predefined noise levels, which may hinder the diffusion training process. However, this claim has not been theoretically verified, so the authors should provide theoretical proof to demonstrate the relationship between diffusion training and sparse training."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024sparsedm,\ntitle={Sparse{DM}: Toward Sparse Efficient Diffusion Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3kADTLbKmm},\nnote={under review}\n}"
},
"abstract": {
"value": "Diffusion models have been extensively used in data generation tasks and are recognized as one of the best generative models. However, their time-consuming deployment, long inference time, and requirements on large memory limit their application. In this paper, we propose a method based on the improved Straight-Through Estimator to improve the deployment efficiency of diffusion models. Specifically, we add sparse masks to the Convolution and Linear layers in a pre-trained diffusion model, then transfer learn the sparse model during the fine-tuning stage and turn on the sparse masks during inference. Experimental results on a Transformer and UNet-based diffusion models demonstrate that our method reduces MACs by 50% while increasing FID by only 0.44 on average. Sparse models are accelerated by approximately 1.2x on the GPU. Under other MACs conditions, the FID is also lower than 1 compared to other methods."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Diffusion models",
"sparse pruning",
"2:4 sparsity"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/fc8524ef642e2c3611ca910a36dc747a9becf3a4.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/47c77c2cd47cc71808204e066cc809a38721c237.zip"
},
"title": {
"value": "SparseDM: Toward Sparse Efficient Diffusion Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3kiZ5S5WkY | Iterative Substructure Extraction for Molecular Relational Learning with Interactive Graph Information Bottleneck | main | Active | Molecular Relational Learning;EM Algorithm;Substructure Extraction;Interactive Graph Information Bottleneck | applications to physical sciences (physics, chemistry, biology, etc.) | 5;5;6;8 | 3;4;3;4 | 3;3;3;3 | 2;3;3;4 | 2;2;2;4 | 6 | 3.5 | 3 | 3 | 2.5 | 0.408248 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1. Please justify your assumption stated at line 161.\n2. For Line 1224 Figure 5, why do you only choose to conduct the ablation study on the ChChMiner dataset? Ablation studies on larger datasets are needed.\n3. Following your design, IGIB-ISE should effectively identify the core substructure of molecules, why did the model not improve the classification accuracy more? As it reduces redundant information, why does it occupy a larger space? More analysis is needed to identify factors that may limit the improvement. What are the potential enhancement may be introduced to address these limitations?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.\tThis paper has good clarity. It is well-written with a clear structure. In a concise but informative style, readers would find it easy to understand the key concepts, backgrounds, and methods.\n2.\tTheir work also brings new insights into the MRL area. They noticed the inefficiency of current methods, where using the complete profile of an interacting molecule could not only be unnecessary but also comprises generalizability. And they proved the effectiveness of their method through experiments. \n3.\tIn general, they bring new ideas to the MRL area: Interactive Graph Information Bottleneck (IGIB). Bottleneck-based methods are widely used in many areas and receive satisfactory results. In this paper, they integrated it into the ISE framework for further optimization. It is also the method that leverages the model’s performance to outperform all baselines."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "To alleviate the problems in current methods of molecular relational learning: insufficient consideration of molecular interactions and failure to capture high-quality substructures, this paper introduces an IGIB (Interactive Graph Information Bottleneck)-ISE (Iterative Substructure Extraction) method. Their work achieves better performance than current SOTA models in terms of accuracy, generalizability, and interpretability."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\t(General Assumption) Most molecule interactions may depend on each molecule’s substructures, but does this apply to all molecule interactions? If not, the assumption at line 161 is somewhat arbitrary, where some edge cases could be ignored by this model. This assumption needs to be further justified. \n2.\t(Time and Space Complexity) While the model outperforms all the baseline models, it spends much more time processing DDI Datasets. Compared to CMRL, with around 1% accuracy improvement, this model costs 5.8 ~ 7.1x more time and 6.4 ~ 9x more space. This may lead to expensive computation. The trade off between the performance and computing cost needs to be examined. \n3.\t (Ablation Experiment) Most experiments are designed well, but the experiment in line 1224 is less persuasive. Among all the datasets for the drug-drug interaction prediction task, ChChMiner has the fewest data points. Besides, since molecular interaction prediction tasks are different from DDI, a separate experiment would be good. \n4.\t (Improvement) While IGIB-ISE achieves good performance, ISE fails to outperform all Category II methods in Table 1 (line 324) and some Category II methods in Table II (line 378). Also, the improvement of IGIB-ISE is not that noticeable in the classification task."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "In general, the paper is a solid contribution but the presentation should improve. Please answer to my questions above."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "(S1) The paper solves a timely problem and presents a sound solution that fully exploits the relationships among substructures.\n\n(S3) Due to its substructure alignment, IGIB-ISE outperforms previous techniques on several datasets.\n\n(S3) The method is well-motivated and builds on previous graph information bottlenecks, ELMO and expectation maximization."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper describes a method to improve molecular relational learning using information theoretical loss functions on a subgraph of the molecules. The technical contribution lies in the coupling of graph information bottlenecks with expectation maximization. The results show the approach's superiority both in deductive and inductive scenarios. The method is well-motivated, and the experiments are solid."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "(W1) Missing explicit objective function: The paper first explains the solution and then reaches the objective in Equation 8. I find this presentation counterintuitive. Why not present the objective first and then explain how to compute it?\n\n(W2) In the modelling of the graph there is no feature vector associated with nodes/edges. Are the graphs without attributes? Molecules should have information about the type of bonds among atoms.\n\n(W3) Notation without introduction: The paper uses notation without introducing it. Examples include:\n\n- $\\mathbf{Y}_\\mathcal{G}$\n- Line 216: the symbol *, is it a matrix multiplication?\n- $\\||$ in line 218\n\n(W4) If sim is symmetric cosine similarity, what is the need for computing both $sim(F_1, F_2)$ and $sim(F_2, F_1)$?\n\n(W5) It is not clear how Eq. 5 ensures that the two structures are aligned since $H_1$ and $H_2$ refer to two different embeddings spaces, or is the alignment enforced by the two matrices $I_{12}, I_{21}$? Please explain and motivate.\n\n(W6) What is the Gumbel sigmoid and how does it help in this case?\n\n(W7) It is not clear whether Eq. 16 is a lower bound on Eq. 8 or what is the relationship with Eq. 8? Is that an approximation or a heuristic? This aspect should be clarified in the text.\n\n(W8) In Figure 4, the focus of the network substantially changes over iteration. This seems to indicate that the method struggles with convergence. Is that expected or is it a sign of instability?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "**1. The discussion of the limitations of Category II methods is confusing.** \n- It is understandable that core substructures often play a crucial role in molecular interactions. But, Figure 1 (a) does not deliver a relevant message to support this argument. \n- In addition, from Figure 1 (a), it is unclear why integrating the complete profile of an interacting molecule into the substructure generation can be overwhelming. \n- It's unclear why Category II carries the risk of compromising generalizability. After reading the cited paper [1], it's still very confusing. There is no clear evidence from [1] to support this statement. \n- It's unclear why the authors mention \"Activity Cliffs\" here. \n\n**2. Limited Discussion of Method Robustness.**\nAs an interactive method, what happens if the EM algorithm finds optimal solutions during iteration? The lack of guidelines for selecting optimal iteration numbers based on dataset characteristics leaves important practical questions unanswered.\n\n**3. Technical Clarity Issues.** \n- Line 160, what is Y_G? Should it be Y?\n- In Tables 6-7, your method should be named ISE-IGIB or IGIB-ISE?\n\n**4. Computational Overhead.** \n- Tables 6 and 7 show IGIB-ISE takes more than 700% execution time and 1000% memory compared to one baseline DSN-DDI, with around 1.5% DDI performance improvement. I don't appreciate such results. The authors do not sufficiently address this limitation or propose potential optimizations. \n- The experiments focus on relatively small molecules. There is no discussion or analysis of how the method scales with molecular size, which is important for applications involving larger molecules. \nThe memory requirements (Table 6-7) suggest potential scaling issues.\n\n[1] Mechanisms of drug combinations: interaction and network perspectives"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper presents a novel approach to molecular interaction learning. Rather than handling entire molecular structures or extracting substructures independently, it introduces an iterative refinement process guided by molecular interactions. \n\n- Using EM algorithms for substructure extraction is creative, treating substructures as latent variables that get refined through iterations. This is a fresh perspective on the molecular interaction learning problem.\n\n- This work has a substantial potential impact on drug discovery and materials science. The ability to identify and understand interacting substructures between molecules is crucial for these fields."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces the Iterative Substructure Extraction (ISE) framework for molecular relational learning, addressing how molecules interact through their core substructures. The framework combines an Expectation-Maximization algorithm for iterative refinement with a new Interactive Graph Information Bottleneck (GIB) theory to ensure extracted substructures are minimal yet influential. Through experiments on datasets covering both regression and classification tasks, the combined IGIB-ISE approach demonstrates improved accuracy and interpretability compared to existing methods for predicting molecular interactions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The discussion of the limitations of Category II methods is confusing.\n\nLimited Discussion of Method Robustness. \n\nTechnical Clarity Issues. \n\nComputational Overhead."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Can the authors validate the interactions between multi-molecule interactions?\n\n2. Why the interaction is computed as $H_1=F_1^{(1)}||F_1^{(2)}$?\n\n3. The way to extrapolate the core substructure is **very similar to [1]**. What's the difference between this paper and [1]?\n\n4. What's the complexity of the method? Can you compare the training and inference time with baselines?\n\n5. Can you validate your method on larger datasets?\n\n[1] Capturing substructure interactions by invariant Information Bottle Theory for Generalizable Property Prediction"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper introduces an innovative method for core substructure extraction using the EM algorithm, which effectively captures molecular interactions.\n\n2. IGIB theory ensures a precise and compact extraction of interactive substructures.\n\n3. The method is extensively validated across various molecular relational learning tasks, including drug-drug interaction and solvation energy prediction, showing clear improvements over state-of-the-art methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a framework called ISE to improve MRL by focusing on the interaction between core substructures of molecules. The model iteratively refines the core substructures using the EM algorithm. Additionally, the IGIB theory is proposed to capture minimal but most influential substructures, enhancing the efficiency and generalizability of the extraction process. Through extensive experiments, the IGIB-ISE framework demonstrates superior performance compared to existing methods in terms of accuracy, generalizability, and interpretability for molecular interaction prediction tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. **Some parts of this work is very similar to [1]**. The key idea and many formulas are similar. For example, they all utilize similar methods to extrapolate core substructures (Section 3.4 in this paper and Section 3.2 in [1]). The only difference here seems to be this paper extrapolates the core substructure from a pair of graphs while [1] extrapolates the core substructure from one graph.\n\n1. The framework is validated on interactions between two molecules. It does not extend to more complex scenarios like multi-molecule interactions, which are important in real-world biochemical environments.\n\n2. The method requires more iterations, increasing resource consumption and time. This may limit its scalability for very large datasets or complex molecular systems.\n\n\n\n[1] Capturing substructure interactions by invariant Information Bottle Theory for Generalizable Property Prediction"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024iterative,\ntitle={Iterative Substructure Extraction for Molecular Relational Learning with Interactive Graph Information Bottleneck},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3kiZ5S5WkY},\nnote={under review}\n}"
},
"abstract": {
"value": "Molecular relational learning (MRL) seeks to understand the interaction behaviors between molecules, a pivotal task in domains such as drug discovery and materials science. Recently, extracting core substructures and modeling their interactions have emerged as mainstream approaches within machine learning-assisted methods. However, these methods still exhibit some limitations, such as insufficient consideration of molecular interactions or capturing substructures that include excessive noise, which hampers precise core substructure extraction.\nTo address these challenges, we present an integrated dynamic framework called Iterative Substructure Extraction (ISE). ISE employs the Expectation-Maximization (EM) algorithm for MRL tasks, where the core substructures of interacting molecules are treated as latent variables and model parameters, respectively. Through iterative refinement, ISE gradually narrows the interactions from the entire molecular structures to just the core substructures.\nMoreover, to ensure the extracted substructures are concise and compact, we propose the Interactive Graph Information Bottleneck (IGIB) theory, which focuses on capturing the most influential yet minimal interactive substructures. In summary, our approach, guided by the IGIB theory, achieves precise substructure extraction within the ISE framework and is encapsulated in the IGIB-ISE}\nExtensive experiments validate the superiority of our model over state-of-the-art baselines across various tasks in terms of accuracy, generalizability, and interpretability."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Molecular Relational Learning",
"EM Algorithm",
"Substructure Extraction",
"Interactive Graph Information Bottleneck"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/ea31c56fe8c87654bf3f849f4c6226f7c7b4a910.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to physical sciences (physics, chemistry, biology, etc.)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Iterative Substructure Extraction for Molecular Relational Learning with Interactive Graph Information Bottleneck"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3ktyyYGLxB | Commute Graph Neural Networks | main | Active | Graph Neural Networks;Message Passing;Commute Time;Node Classification | learning on graphs and other geometries & topologies | 3;5;5;8 | 4;4;4;4 | 1;2;2;3 | 2;2;3;3 | 1;3;3;4 | 5.25 | 4 | 2 | 2.5 | 2.75 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1] The derivation of your DiLap operator appears to be flawed. In particular, in the second line of Equation (12) when you pull $s_i$ out of the sum, the term $s_i$ arises $d_i^{out}$ times and therefore, Line 2 should be $s_i - \\frac{1}{d_i^{out}} \\sum s_j.$ The correction of this error means that you should be working with the random walk Laplacian (see e.g. [1]), which would be far more intuitive. To me it seems that for it to be possible to accept this paper at ICLR, the derivation of your operator needs to be corrected and the subsequent experiments should be adjusted. \n\n2] I am unsure what adjacency matrix you use in Line 314 to sparsify the matrix $\\tilde{\\mathcal{C}}.$ Are you using the adjacency matrix corresponding to the graph in which you have added the node feature similarity edges? And if not, could you hypothesise how severe the impact may be of calculating the commute times on a rewired graph and to then message pass with the original graph. \n\n3] Your method appears to be relatively memory intensive. In particular, you seem to require the evaluation of the exponential function fo the dense matrix $\\tilde{\\mathcal{C}}$. Empirical evaluation of, not only the time, but also memory complexity of your method in comparison to your baseline methods would be very valuable. \n\n4] The ablation study in Table 2 is very interesting! I think it should be extended in scope to also extend to homophilic datasets and to also include other commonly used message passing operators, such as the symmetrically normalised adjacency matrix used in the GCN and the PageRank matrix used in the PPrGo model [2].\n\n5] It does not seem sensible to me to compare your sparisfied commute time based CGNN to the CGNN$_{ppr}$ using the dense PageRank matrix. It seems to be fairer to me to either compare dense versions of both matrices or sparse versions of both matrices. 
In particular, since you sparify your commute time matrix with the adjacency matrix, it would be interesting to compare your model to a PageRank-based scheme, where the PageRank matrix is also sparsified with the adjacency matrix. \n\n6] Minor comments:\n\n6.1] The abbreviation \"SPD\" is used in Line 45 before its definition. \n\n6.2] The contributions of your paper are not explicitly listed. \n\n\n[1] Von Luxburg, U., 2007. A tutorial on spectral clustering. Statistics and computing, 17, pp.395-416.\n\n[2] Bojchevski, A., Gasteiger, J., Perozzi, B., Kapoor, A., Blais, M., Rózemberczki, B., Lukasik, M. and Günnemann, S., 2020, August. Scaling graph neural networks with approximate pagerank. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 2464-2473)."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The rewiring scheme that you propose is simple, but rather nice in my opinion. It would be interesting to see further study of its impact on the overall graph structure. \n\n- Your proposed CGNNs are compared to a comprehensive set of baseline models, which great to see. \n\n- The analysis of the application scope of your proposed model is very strong indeed and something that is generally not done enough in our literature."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In the submitted manuscript, the authors propose a novel digraph Laplacian, which is later used to more efficiently calculate the commute time of pairs of nodes. They furthermore propose a simple node feature based rewiring scheme, which allows them to ensure that the resulting graph gives rise to an aperiodic, irreducible Markov Chain, which has a unique steady state. The authors then propose to calculate commute times on this rewired graph, to transform these commute times by taking the exponential function of this matrix and subsequently sparsifying it with the adjacency matrix. This then allows the authors to propose a variant of the DirGNN, called CGNN, in which edges are reweighted by their transformed commute times. The authors finally evaluate the empirical performance of their CGNNs against a large variety of baseline models on a large number of datasets and find consistently good, although sometimes marginal performance improvements. They furthermore analyse these results and provide several insightful further experiments on runtimes and ablation studies of different model components."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Please find further details on my listed weaknesses in my questions below.\n- The proposed model boils down a weighting of the DirGNN by an efficiently calculated function of the commute times, which is a rather trivial change. \n- The derivation of the DiLap matrix appears to be flawed. \n- Your ablation studies could be improved and extended."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "As outline in the weaknesses above, can the authors provide a principled reason for why features aggregated from a node's neighbors should be weighed by the commute distance of the node to its neighbors? Can the authors provide any theoretical justification for using commute distance for such weighing?\n\nThe authors provide one figure where they show for Chameleon and Squirrel data sets that an adjacency matrix constructed from nearest neighbors weighted by their commute distance more closely resembles an adjacency matrix constrained to connecting nodes within the same class. Can the authors include other empirical measures of how the weighting of neighbors can improve performance of GNNs? Can the authors look at properties of longer hops (going beyond one-hop neighbors) and how information is aggregated across nodes within or outside the same class? Would it be possible to construct a synthetic data set that would shed light on the mechanism behind why they see an improvement?\n\nHow does the proposed model's performance change with depth? The authors claim that the weighted neighbors avoids the problem of aggregation of irrelevant information as depth is increased? It would be informative to see an empirical demonstration of this. The authors should plot the performance of their model as a function of model depth and compare with existing models.\n\nIn addition, can the authors apply their approach a real world problem on directed graphs that requires long range information transmission, such as power grids or traffic flow data? This would bolster the empirical support for their method.\n\nAs noted in the weaknesses above, can the authors try alternative methods for graph rewiring that also produce sparse graphs, such as constructing a kNN graph using the node features? The authors should include an empirical comparison or provide theoretical justification for their method. 
Is the proposed similarity-based rewiring mode optimal?\n\nHow much of the improvement is coming from the fact that the Laplacian proposed by the authors is weighted by the stationary probability of random walks on the graph (or what the authors call the importance of a node)? Can the authors do an ablation study to disentangle this from the commute distance?\n\nThis reviewer was confused by the comment that general GNNs can outperform models tailored for directed graphs with hyper-parameter tuning. In Table 1, DirGNN, a model tailored for directed graphs, outperforms GCN. Can the authors clarify what they mean by this comment?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "This is an interesting paper with a clever insight. It is sensible to say that not all nearest neighbors on a directed graph are created equal. Weighing the features of some neighbors more during the aggregation step of a GNN based on the shorter commute distance of those neighbors to the original node is an intriguing idea. Weighing based on the commute distance certainly sounds reasonable. The proposed method for rewiring a graph to make it irreducible and aperiodic while still retaining sparsity is also clever as is the weighted Laplacian that can be used to efficiently compute the commute distance leveraging its sparsity using methods such as randomized truncated singular value decomposition.\n\nThe state-of-the-art performance achieved using the author's method on some of the most commonly used directed graph data sets, such as Squirrel, Chameleon, etc. is impressive and provides reasonable empirical proof of the validity of the proposed approach. The authors also include solid empirical evidence on running times of their algorithm and convincing comparisons to PageRank for graph rewiring and calculation of commute distances."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors propose a method for weighing the features of the neighbors of a node during aggregation step of GNN based on the commute distance of the neighbor to the node. The commute distance between nodes A and B is the average number of steps that a random walk takes traversing from node A to node B and back to node A. This distance is particularly relevant for directed graphs because although all nearest neighbors are one hop away from a node, their commute distances might vary because of the constraints imposed by the directions of the edges. One neighbor might require a circuitous path along many other nodes before returning to the original node (have longer commute distance) whereas another neighbor could be closer. The authors' key idea is to weight the importance of the features of the neighbors of a node during the aggregation step of GNN based on the commute distance of the node to those neighbors. Besides this weighing, the aggregation and update scheme that they use is based on that of Rossi et al. where the features of the incoming and outgoing neighbors are aggregated separately and used alongside the node's own features during the update step. The authors also propose an efficient way of computing commute distance. To do so, they introduce a weighted Laplacian for directed graphs that accounts both for the directional connectivity of the nodes and their importance (computed as the stationary probability at each node of a random walk on the graph). The authors also introduce a way to rewire a graph to ensure that it is irreducible and aperiodic while keeping the graph sparse (unlike alternative methods such as PageRank). The commute distance can then be efficiently computed using the sparse weighted Laplacian. Finally, the authors show empirically that their proposed approach improves on existing methods when applied to many standard directed graph data sets such as Squirrel and Chameleon."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "To this reviewer, the biggest weakness of the paper was that although weighing neighbors by commute time is sensible, it is not necessarily principled. Is there any reason to a priori expect that neighbors of a node that have shorter commute times to that node somehow contain more relevant features for learning on graphs? This seems to depend on the nature of the learning problem and the data set. Now, the authors can argue that their empirical evidence is sufficient to motivate their approach. However, more should be done here to support the author's proposal. Some evidence is provided in that an adjacency matrix constructed by weighing the neighbors by their commute distance more closely resembles an adjacency matrix constrained to edges that connect nodes within the same class. The authors should expand on this. What does this look like for other data sets? How does aggregation of information using these weighted neighbors look across multiple hops and longer distances across the graph? The authors should have come up with synthetic data sets that can elucidate the mechanism behind the improvement that they are seeing.\n\nIf empirical evidence is the main motivation behind the proposed schemes, the authors could have dome more to build a stronger case. An argument is made in the paper that with the weighing of the neighbors less irrelevant information is aggregated as the GNN models go deeper. The authors should empirically demonstrate this by showing how the performance of their model changes with depth and contrast with existing models. In general, it would have been very interesting to see the impact of the weights proposed in this paper on multi-hop GNN models, such MixHop, Shortest Path Networks, or DRew. 
In addition, it would have been more convincing if the authors had applied their approach to real world problems of directed graphs in addition to standard benchmark data sets used in Table 1 such as temporal web traffic data, power grids, traffic flow, etc.\n\nThe method proposed in the paper to rewire the graph to ensure irreducible and aperiodic graphs is also ad hoc and not very principled. The method certainly produces a sparse graph unlike PageRank, however, other approaches can also be used to generate irreducible graphs that are sparse such as generating a kNN graph based on the node features. It is not clear that the proposed method is optimal in any way other than that it outperforms the amended probabilities used in PageRank. See Questions below for suggestions on how the authors can evaluate this empirically."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "I have no further questions, though I would recommend that the authors address the previously noted weaknesses."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "This paper is engaging and well-written, with a thorough background review that enhances accessibility and readability. Key strengths I noted include:\n\n1. **Novel Approach with Significant Potential:** The proposed method, particularly the newly formulated digraph Laplacian, offers a fresh perspective with substantial potential for future research and applications.\n\n2. **Comprehensive Component Analysis (Section 5.3):** The inclusion of a component analysis strengthens the paper by providing an effective ablation study.\n\n3. **Clear Contribution and Baseline Comparison:** The authors clearly articulate their contributions, outlining the distinctions between their method and existing baselines. They explain where prior approaches fall short and demonstrate how their approach addresses these limitations.\n\n4. **Effective Visual Aids:** Figures 1 and 2 are well-designed and enhance understanding by clarifying details within the method.\n\n5. **Robust Experimental Validation:** he paper validates its approach across a wide variety of datasets and multiple baseline comparisons, highlighting the robustness and generalizability of the proposed method.\n\n6. **Reproducibility:** The authors provide code for reproducing their experiments."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors present a novel approach that integrates node-wise commute time into a message-passing framework, introducing the Commute Graph Neural Network (CGNN). The central contribution of CGNN is the development of a new directed graph Laplacian, designed to address path asymmetry in directed graphs. The authors demonstrate that CGNN outperforms existing baselines across most datasets and effectively motivates the significance of the problem they address.\n\nOverall, I found the paper well-executed and recommend it for acceptance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Overall, this paper is strong in its methodology and results, though I have a few recommendations that could enhance its clarity and depth.\n\n1. **Graph Density in Rewiring Approach:** While I appreciate that the authors provided commute times before and after rewiring in Table 3, it would be beneficial to also examine how rewiring affects graph density. This additional metric could offer deeper insights into structural changes post-rewiring.\n\n2. **Unobserved Edges in Definition of $m_{i,in}^{(l)}$ and $m_{i,out}^{(l)}$:** Given that unobserved edges are introduced to the graph, I suggest adjusting the definitions of $m_{i,in}^{(l)}$ and $m_{i,out}^{(l)}$ to account for these edges, potentially assigning them a lower weight than observed edges. This adjustment could yield a more realistic representation of edge significance.\n\n3. **Model Complexity:** The model’s complexity is relatively high, even though it’s reported to be on par with other GNN models. This complexity, particularly in precomputation, might be a barrier in some cases. However, I do not consider this a critical issue, as future work could address and optimize this aspect.\n\n4. Inclusion of Synthetic Datasets: While the paper impressively covers a range of empirical datasets, the addition of synthetic datasets could improve interpretability. By embedding known patterns, synthetic data could highlight the model's strengths and limitations in detecting specific features.\n\n5. Reordering Related Work: Placing the Related Work section (currently Section 6) closer to the beginning would make the reading experience smoother, giving readers essential context before diving into the methodology and results.\n\nThese revisions would, in my opinion, strengthen the paper without diminishing its core contributions."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1) What do you mean by the footnote on page 1?\n2) Why is the proposed Laplacian sparse? From Eq. (5), the matrix $P$ seems to be a complete matrix.\n3) What is the relationship between Eq. (5) and $D^{-2}L$ and why do you should choose Eq. (5)?\n4) Why does the rewiring procedure only minimally alters the overall semantics of the original graph?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "1) The idea of considering commute time is novel and reasonable.\n2) The topic of directed graph neural networks is important.\n3) The source code is provided."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes Commute Graph Neural Networks (CGNN) for directed graphs, which is based on a new digraph Laplacian matrix taking into the commute time on a (possibly rewired) strongly connected graph. Theoretical and empirical analysis is provided."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1) The paper is poorly written with unsupported claims. For example, the footnote on page 1 is not sensible, and it is not clear why the previous methods are undirectional during shortest path computation.\n2) It is unclear why the proposed Laplacian is sparse. From Eq. (5), the matrix $P$ seems to be a complete matrix.\n3) It is unclear what the relationship is between Eq. (5) and $D^{-2}L$ and why you should choose Eq. (5).\n4) Being strongly connected is too strong an assumption, and it is not clear why the rewiring procedure only minimally alters the overall semantics of the original graph.\n5) [1] mentions flow imbalance in directed graphs and is not discussed. It is also unclear whether the idea in [1] is considered undirectional by the authors.\n6) (minor) Grammar issues: e.g., line 115.5 \"notations. We\" should be \"notations, we\"\n\nReference:\n [1] He, Y., Reinert, G., & Cucuringu, M. (2022, December). DIGRAC: digraph clustering based on flow imbalance. In Learning on Graphs Conference (pp. 21-1). PMLR."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose an approach to integrate commute time into graph neural networks to enhance the analysis of directed graphs, effectively addressing the asymmetry and complex path interactions inherent in these structures."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024commute,\ntitle={Commute Graph Neural Networks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3ktyyYGLxB},\nnote={under review}\n}"
},
"abstract": {
"value": "Graph Neural Networks (GNNs) have shown remarkable success in learning from graph-structured data. However, their application to directed graphs (digraphs) presents unique challenges, primarily due to the inherent asymmetry in node relationships. Traditional GNNs are adept at capturing unidirectional relations but fall short in encoding the mutual path dependencies between nodes, such as asymmetrical shortest paths typically found in digraphs. Recognizing this gap, we introduce Commute Graph Neural Networks (CGNN), an approach that seamlessly integrates node-wise commute time into the message passing scheme. The cornerstone of CGNN is an efficient method for computing commute time using a newly formulated digraph Laplacian. Commute time is then integrated into the neighborhood aggregation process, with neighbor contributions weighted according to their respective commute time to the central node in each layer. It enables CGNN to directly capture the mutual, asymmetric relationships in digraphs. Extensive experiments confirm the superior performance of CGNN. Source code of CGNN is anonymously available here."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Graph Neural Networks",
"Message Passing",
"Commute Time",
"Node Classification"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/6bc6f7cbcd52cd57b6b144506252118318511eea.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning on graphs and other geometries & topologies"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Commute Graph Neural Networks"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3l6PwssLNY | CR2PQ: Continuous Relative Rotary Positional Query for Dense Visual Representation Learning | main | Active | Self-supervised learning;Distillation | unsupervised, self-supervised, semi-supervised, and supervised representation learning | 5;5;5;6 | 3;2;3;4 | 2;2;3;3 | 2;3;2;2 | 2;3;2;2 | 5.25 | 3 | 2.5 | 2.25 | 2.25 | 0.816497 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please refer to the weaknesses above."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Writing quality is good. The paper is well-structured, and clearly written.\n2. SOTA performance. The paper demonstrates the state-of-the-art performance on mainstream detection and segmentation datasets, such as COCO and ADE20K, which is impressive.\n3. Versatility of the method. The paper shows the simplicity of CR2PQ, which can be easily integrated into a variety of popular representation learning frameworks, such as mask-based learning, contrastive learning, and distillation methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces the Continuous Relative Rotary Positional Query to enhance dense visual contrastive learning by improving pixel/patch correspondence across different views. It addresses limitations in existing self-contrasting methods by transforming discrete positional embeddings into continuous representations. The proposed CR2PQ enables more effective patch-level representation learning, achieving state-of-the-art results and faster convergence in detection and segmentation tasks on the COCO dataset."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Reliance on random cropping. Although random cropping can increase the variability of the input, its results may still be limited by the randomness of the cropping. In extreme cases, it may result in almost no overlap between the generated views, affecting the learning effect of the model.\n2. Computational complexity. Complex matrix operations are required when calculating relative position embedding and rotating embedding, which increases the burden in scenarios with limited computing power.\n\nP.S. There is an error in Figure 1. [CLS] should be global information, while patch is local information."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "-Why use a pretraining network for the teacher? You are comparing with other baselines which some of which learn everything from scratch. This seems like a logical thing to try, have you tried that?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "-The empirical results are good and outperform previous SOTA.\n\n-I think this paper can be worthwhile to accept, I'm willing to improve my score based on the author's reply."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose a distillation technique where a student is densely trained to match teacher features. The novelty comes from using 2D RoPE in the network as well as a cross-attention module with relative positional information. They show good empirical results on detection and segmentation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "-L142: Relative positional encoding = RoPE?\n\n-L161: W_{pos} v.s. P_{pos} ?\n\n-The notation in equation 1 is confusing. It is as if the patches don’t interact with each other. I would use a new variable to define a patch representation. Also if f_\\theta denotes the ViT, why does it take z as input, which already contains the linear layer on the left side of the equation but not on the right side. I think the notation should be made more precise.\n\n-Equation 2 has some n and m mixed.\n\n-L219: “we set each patch size of the view A as 1”, but in L227 p_A (the patch size) is defined?\n\n-L228: There is a sentence “Since we set each grid size of the anchor view as 1.” What is that supposed to mean?\n\n-L297: If I’m not mistaken, the definition of q doesn’t make sense.\n\n-The first stated contribution is using 2D RoPE for SSL based methods. Then, in L358, shoud state “We also evaluate the detection and segmentation without pretraining i.e. directly using 2D RoPE”. First, that entry is only in Table 1 and not Table 2. Second, I think you should also independently show empirical evidence of your 2 first contributions (2D RoPE and cross-attention module) and report results for that.\n\n-In general, I think the paper could be more explicitaly precise with how sizes/positions are encoded e.g. is it relative to the original image input grid or relative to the crop?\n\n\n\nMinor:\n\n-L082: “as the downstream task only input”\n\n-If I’m not mistaken, there is a problem with sentence at L203 starting with i.e."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. What is the performance of the CR2PQ backbone performance on some strong detectors, such as DINO or Co-DETR?\n\n2. CR2PQ requires the teacher model to provide contrastive pairs, however, the performance does not improve as the model becomes larger (ViT-L vs ResNet50). The reviewer wonders about the performance of a larger model for the student. Does this approach work for a larger backbone as a student, such as ViT-L/ViT-G? The authors are suggested to validate the scalability of the method.\n\n3. Some small mistakes\n\n- The font of the paper is different from other papers. Should it be correct? \n\n- line 274, there is an overlap between the table and the caption."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. CR2PQ introduces a pioneering method for dense visual representation learning by utilizing continuous relative rotary positional embeddings, which is a significant departure from traditional discrete embeddings.\n\n2. The method achieves state-of-the-art results across various benchmarks, including object detection and segmentation tasks on COCO and semantic segmentation on ADE20K, outperforming previous leading methods by a considerable margin.\n\n3. The introduction of a positional-aware cross attention module enhances the learning of semantic information without incurring significant additional computational costs. CR2PQ's use of rotary positional embeddings makes it robust to various view augmentations, including random cropping, which is a common challenge in contrastive learning methods.\n\n4. The paper supports the method's strengths through extensive experiments and ablation studies, providing a thorough analysis of CR2PQ's performance under different conditions and configurations."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "1. The paper introduces Continuous Relative Rotary Positional Query (CR2PQ), a novel method for dense visual representation learning.\nCR2PQ addresses the challenge of establishing pixel/patch correspondence across different views in dense contrastive learning (DRL) by transforming discrete positional embeddings to continuous representations.\n\n2. It utilizes a rotary positional embedding to represent the relative positions between two views and reconstructs the latent representations of one view from another through a rotary positional query.\n\n3. The method simplifies the dense contrastive learning paradigm by making it correspondence-free and integrates easily into various representation learning frameworks.\n\n4. Extensive experiments on standard datasets demonstrate state-of-the-art (SOTA) results, outperforming the previous SOTA method (PQCL) significantly in detection and segmentation tasks on COCO with improved mAP scores."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Experiments. The author should provide more scales of backbone to validate the scalability of the method. Most experiments are conducted on ViT-S. The reviewer understands the efficiency of the experiments, however, there should be some experiments on larger backbones."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- In Table 1, what does the row \"RoPE\" exactly correspond to? A ViT-S/16 equipped with rotary positional embedding, randomly initialized and finetuned on the downstream task?\n\n- In Table 4, what does the row \"EMA update (Contrastive)\" exactly correspond to? Is the teacher randomly initialized?\n\n- At line 219. it is mentioned that the patch size of view A is set to 1, but then it is set to $p_{A}$. Can you clarify this?\n\n- At line 227: I suggest using another notation for $p_{A}$ as the patch size, as it is confusing."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The proposed self-supervised framework for dense visual representation learning is novel.\n- The method elegantly eliminates the need to establish explicit correspondence between local features across views by leveraging relative positional cues.\n- The performance on dense downstream tasks is thoroughly evaluated, showing faster convergence and achieving state-of-the-art results on standard benchmarks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a novel self-supervised framework for dense visual representation learning, which avoids the need for explicit dense correspondences between local features across views. Instead, the framework reframes the task as predicting local representations from one view to another, guided by relative positional cues. It integrates rotary positional embeddings within the student model and distills knowledge from a pre-trained, frozen teacher model. This approach yields faster convergence and improved performance on standard benchmark evaluations."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The method differs from existing baselines in three key ways: (1) the use of rotary positional embeddings, (2) the use of a pre-trained, frozen teacher model, and (3) the proposed pretext task. This makes it challenging to assess the contribution of each component to the overall performance. Specifically, the fairness of the experimental setup is questionable, as other methods are trained from scratch while CR2PQ benefits from a pre-trained teacher. More ablation studies are needed to separate the impact of each element.\n\n- Overall, the writing is difficult to follow, with multiple notation inconsistencies, typos, and signs of negative vertical spacing used to fit within the page limit.\n\n- Equation 1 is misleading/incorrect as it suggests that the representation of a single patch is independent of its context.\n- Equation 2: The angle of the key seems incorrect.\n- Line 210: The image dimensions are inconsistent with line 157.\n- Line 214: Inconsistent use of $\\mathbf{p}{a}$ and $\\mathbf{p}{A}$.\n- Line 234: The notation is inconsistent with the left side of Equation 3.\n- Table 1: Framwork $\\rightarrow$ framework.\n- Figure 1: There seem to be inconsistencies in the notations used within the figure and also with respect to the method section.\n- \"pertaining\" $\\rightarrow$ \"pretraining\"/\"pre-training\" (11 occurrences).\n- Line 86: exhausted $\\rightarrow$ exhaustive.\n- Line 161: $\\mathbf{W}{pos}$ $\\rightarrow$ $\\mathbf{P}^{i}{pos}$."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024crpq,\ntitle={{CR}2{PQ}: Continuous Relative Rotary Positional Query for Dense Visual Representation Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3l6PwssLNY},\nnote={under review}\n}"
},
"abstract": {
"value": "Dense visual contrastive learning (DRL) shows promise for learning localized information in dense prediction tasks, but struggles with establishing pixel/patch correspondence across different views (cross-contrasting). Existing methods primarily rely on self-contrasting the same view with variations, limiting input variance and hindering downstream performance. This paper delves into the mechanisms of self-contrasting and cross-contrasting, identifying the crux of the issue: transforming discrete positional embeddings to continuous representations. To address the correspondence problem, we propose a Continuous Relative Rotary Positional Query ({\\mname}), enabling patch-level representation learning. Our extensive experiments on standard datasets demonstrate state-of-the-art (SOTA) results. Compared to the previous SOTA method (PQCL), our approach achieves significant improvements on COCO: with 300 epochs of pretraining, {\\mname} obtains \\textbf{3.4\\%} mAP$^{bb}$ and \\textbf{2.1\\%} mAP$^{mk}$ improvements for detection and segmentation tasks, respectively. Furthermore, {\\mname} exhibits faster convergence, achieving \\textbf{10.4\\%} mAP$^{bb}$ and \\textbf{7.9\\%} mAP$^{mk}$ improvements over SOTA with just 40 epochs of pretraining."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Self-supervised learning",
"Distillation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/27b268506d9ce023797408062676ad8dc9be0dd5.pdf"
},
"presentation": null,
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "CR2PQ: Continuous Relative Rotary Positional Query for Dense Visual Representation Learning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3l9NRfezlo | DFL$^2$G: Dynamic Agnostic Federated Learning with Learngene | main | Active | Federated Learning;Low-cost Communication;Learngene | alignment, fairness, safety, privacy, and societal considerations | 3;3;5;5 | 5;4;3;4 | 1;2;3;3 | 2;2;3;3 | 1;2;2;2 | 4 | 4 | 2.25 | 2.5 | 1.75 | -0.707107 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. I believe that the \"cef\" measure in Table 1 doesn't provide a fair comparison, as there is no direct relation between communication cost and accuracy. \n2. It would be nice to see more experimental support, including diverse datasets and non-IID scenarios with different data heterogeneity levels (α = 0.05, 0.5, 0.1).\n3. Also the authors should consider to include one or two standard FL baseline like SCAFFOLD, FedProto, FedTGP, to better demonstrate method's superiority."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. This paper proposes an innovative approach for federated learning, which dynamically initializes effective parameters for new clients and utilizes Learngene concept to reduce communication overhead and strengthen privacy.\n2. The results show that the performance of the proposed method is comparable with the baselines.\n3. The paper is well-structured."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This manuscript proposes a framework, called DFL2G, to address two main challenges in federated learning: (1) initialization of the client model parameters for new \"agnostic\" clients and (2) to reduce communication overhead between clients and server during training process. The framework consists of three modules: Learngene Smooth Learning, Learngene Dynamic Aggregation, and Learngene Initial Agnostic Model, to effectively address these challenges. Experimental results demonstrate that the approach effectively reduces communication cost while maintaining comparative classification accuracy."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Lack of convergence proof and theoretical support.\n2. The experimental results are limited. Further the authors have not considered different heterogeneous settings in their experiments.\n3. There is no comparison with the baselines having similar objectives (e.g., FedProto, FedTGP)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please refer to weaknesses"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. **Seemingly Effective Reduction of Communication Costs:**\n \n D2FL seemingly lowers communication overhead in federated learning where instead of transmitting full model updates, local updates are compressed into lightweight \"learngenes,\" which are then shared with the server. For a fixed communication budget, the tradeoff is improved. This is shown in experimental work\n \n2. **Efficient Initialization of Agnostic Client Models:**\n \n The framework leverages accumulated knowledge from participating clients to generate and store learngenes in a central pool. When new or agnostic clients join the network, they can initialize their models by inheriting these learngenes, facilitating rapid and effective model initialization. \n \n3. **Improved Privacy Preservation:**\n \nBy avoiding the direct sharing of global models and instead using condensed learngenes, D2FL offers improved safety against standard gradient attacks unlike FedAvg. The authors also highlight that the \"privacy\" means defense against gradient based attacks only."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors introduce **D2FL**, a novel method designed to address the challenge of initializing local models for agnostic clients in federated learning without necessitating the sharing of a global model. Leveraging the **Learngene paradigm**, D2FL focuses on the rapid initialization of agnostic models through the use of \"learngenes.\" These learngenes encapsulate essential model knowledge, allowing new or agnostic clients to initialize their local models efficiently by inheriting this distilled information. The primary claims of D2FL include reduced communication overhead and enhanced privacy compared to the standard Federated Averaging (FedAvg) approach. By minimizing the need to transmit large model updates and avoiding the distribution of a global model, D2FL aims to achieve more scalable and privacy-preserving federated learning."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. **Ambiguous Notation for Agnostic Clients:**\n \n The notation used to represent agnostic clients, particularly in lines 128-129, is unclear. \n \n2. **Scalability Concerns Due to Server-Side Storage Overhead:**\n \n The server maintains K cluster models, which introduces significant storage overhead. As the number of clusters increases, the storage requirements may become prohibitive, raising concerns about the scalability of D2FL in large-scale federated learning environments. This limitation is not adequately addressed or acknowledged in the paper. This is especially relevant when comparing with other baselines\n \n \n3. **Insufficient Explanation of the Likelihood Function for FIM Computation:**\n \n The **Fisher Information Matrix (FIM)** is utilized within the framework, but the paper does not explicitly explain the likelihood function used to compute it 202-203. \n \n4. **Complexity of the Learngene Concept:**\n \n As there are multiple procedures happening in the paper, the introduction and explanation of the Learngene concept are convoluted, making the paper difficult to follow. It required multiple reading to understand some concepts. The authors should simplify the presentation of this concept, possibly by providing more intuitive explanations or systematically develop concepts to improve comprehension.\n \n5. **Unclear Combined Loss Function:**\n \n In line 230, the paper presents a combined loss function where the same weight parameter λ controls multiple aspects of the loss. The interaction and impact of λ on different loss components are not clearly delineated. Also the ablation studies do not incorporate the impact of the hyper parameter adjustment of these seperate learngene and elastic gene loss functins\n \n\n\n 6. **Ambiguities in Experimental Figures and Tables:**\n \n **Figure 4:** The dataset and model used in this figure are not clearly specified. 
Additionally, the performance of D2FL in low epoch regions (e.g., epochs less than 10) is smaller than some baselines other methods that perform better under these conditions. This needs to be acknowledged.\n \n **Table 4:** The table does not include standard deviations. Furthermore, it fails to separately evaluate the impact of elasticity and the Learngene component, despite elasticity being a core component of the paper. Same hyper parameter controls both the loss function so it is difficult to establish the impact of these seperate loss functions. This omission makes it challenging to determine the individual contributions of each component to the overall performance.\n \n **Table 5:** Similar to Table 4, Table 5 lacks descriptive information about the datasets used and the statistical measures reported.\n\n7. **Absence of Theoretical Convergence Guarantees:**\n \n The paper does not provide any theoretical analysis or proofs to support the convergence of the Learngene-based initialization method."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "See weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "- The paper proposes \"collaborating, condensing, initializing\" steps analogous to the Learngene paradigm. \n- The topic of dynamic agnostic federated learning is important. \n- The provided empirical results cover various settings and baseline methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper studies dynamic agnostic federated learning, specifically on initializing the client models (by using the learngene paradigm) and achieving better communication overhead while protecting the privacy of the models. They propose DFL$^2$G, which consists of smooth updating, dynamic aggregation, and initial agnostic model."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- **Readability**: \n - There are many mistakes, both in the text and notations, creating obstacles for the reader. \n - $\\mathcal{X}_{k,i}$: why do you need $k$ here? The local datasets $\\mathcal{X}_i$ are not being clustered. \n - Eq.8: why multiplier? \n - [Line 240]: $\\sum_{l=1}^L \\xi_{k,i}^{(l)} = 1$. How does this sum up to 1? It does not seem to be valid. \n - [Line 229]: Overall, you have the following objective function: \n\\begin{equation}\n\\mathcal{L}\\_{all} = \\lambda \\mathcal{L}\\_{gen} + \\lambda \\mathcal{L}\\_{elg},\n\\end{equation}\nwhich gives \n\\begin{equation}\n= \\lambda \\mathcal{L}\\_{cls} (\\mathcal{X}\\_{k,i}) + \\lambda^2 \\|\\| \\theta\\_{k,i} - \\Theta\\_{k} \\|\\|_2 + \\lambda^2 \\|\\| \\theta\\_{k,i}^{'} - \\Theta_k ||_2, \n\\end{equation}\nand it has issues in the formulation. \n - Typos in lines: 198, 199, 201, 226 (what is the second loss function?), 243 (different subscripts), 272, 283 (why j? you can stick to k.), 313, etc. \n\n- Section 2.4. Problems in the SVD decomposition and formulation. How can you set the data dimension $d$ to 5? $d$ can not equal some other value than its original value. \n\n- Privacy analysis. For a fair comparison with other baseline methods, you need to leverage all available information to reconstruct the samples $\\mathcal{X}_i$. Since clients are sharing $V_i$'s with the server, which can aid your reconstruction objective you have (Eq. 12), using the iDLG objective solely is not fair; therefore, it raises a question regarding the results in the paper (Figure 5). \n\n- The number of local epochs is huge (line 335, local epochs = 10), which should not be the case in heterogeneous FL since it makes the clients overfit to their local data. \n\n- The proposal of a new metric. Why propose a metric if you use it only in one table (Table 1)? Also, it is better to see the Acc. measures in Table 1. \n\n- Performance curve comparison (Figure 4). 
The figure doesn't correspond to what is reported in the table, which questions the study's validity. Also, the proposed method has a high variance (deviation) compared to other methods, which doesn't necessarily mean the method outperforms others. The baseline methods do not improve, having a straight-line performance (FedLP, Flearngene). \n\n- Table captions should be on top. \n- Consider citing other works using \\citep{}."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "How you update the cluster was not specific in algorithm 1. As a new agnostic client join the network, it is added to the nearest cluster as stated in line 18 of Algorithm 1. However, as new clients involve the cluster should be updated. Or is it the cluster only built at the beginning?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper presents an innovative solution through the introduction of the Learngene framework. By integrating Learngene into the Dynamic Agnostic Federated Learning paradigm, the authors enable efficient model initialization and communication, particularly for agnostic clients that join the system dynamically.\n\nThe experimental results are compelling, demonstrating a significant reduction in communication costs while maintaining or even enhancing model accuracy. This highlights the framework's ability to improve both scalability and performance in federated learning environments."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper is aiming at addressing two key challenges in Federated Learning (FL): \n1) privacy leakage during client-server communication, and \n2) communication overhead in transmitting model updates. \nTo tackle these issues, the authors propose the Learngene framework for Dynamic Agnostic Federated Learning (DAFL). The Learngene framework introduces a mechanism for compressing model updates into learngenes, which capture the most important information while reducing data transmission and mitigating the risk of privacy leakage. Additionally, the framework supports dynamic client participation, allowing clients to join and leave the system flexibly without compromising performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1) Assume a one-shot dataset in the client. This assumption allows for efficient clustering and model initialization but may limit the framework’s flexibility in handling the common dataset with more samples. \n2) Lack of Dynamic Cluster Management: The paper does not address how to manage clusters when they become too large or too small. In cases of high data heterogeneity, more clusters are required to accurately represent the diversity among clients. However, the framework does not discuss mechanisms to dynamically adjust the number of clusters based on client performance, data distribution, or scalability concerns. \n3) Insufficient Privacy Guarantees: The paper does not provide strong privacy guarantees. The only implication we have based on your illustration is that \"iDLG cannot recover the feature $X \\in R^d $ given learngene\". \nMoreover, the privacy protection is questionable when considering the specifics of the Singular Value Decomposition used in the framework. your $X_i \\in R^{1\\times d}$, $X_i = U_i \\Sigma_i V_i^T$. $U \\in R^{1\\times1}$, $\\Sigma \\in R^{1\\times d}$ diagonal matrix. Therefore, there are only 2 unknown numbers to recover $X_i$. if ignoring the scale ($U \\in R^{1\\times1}$), there are only one number left to recover your $X_i$, which would be easy. \nBesides, The dimensions are not clearly explained for SVD here. Your $X_i$ should be a matrix $X_i \\in R^{1\\times d}$ \n\n\nPresentation: \n1) Your citation format is incorrect for the entire paper. In latex, most of your citations should be \\citep{}. and will be rendered \"FL (McMahan et al. 2017)\". \n2) Since you still have space, I suggest that your algorithm should be placed in the main body of the paper. Because it provides a more general view of how you integrate Learngene smooth learning, learngene dynamic aggregation, and learngene initial agnostic model into one framework.\n3) your algorithm line 4. The tilde of $\\theta$ is in the wrong place. 
\n4) #276 your mentioned $d=5$. Does this mean that your private data $X \\in R^d = R^5 $, If so, is this a typo here?"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose DFL$^2$G, a \"Collaborating & Condensing & Initializing\" dynamic federated learning framework inspired by Learngene, aiming to achieve low-cost communication, robust privacy protection and effective initialization of agnostic models."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024dflg,\ntitle={{DFL}\\${\\textasciicircum}2\\$G: Dynamic Agnostic Federated Learning with Learngene},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3l9NRfezlo},\nnote={under review}\n}"
},
"abstract": {
"value": "Dynamic agnostic federated learning is a promising research field where agnostic clients can join the federated system at any time to collaboratively construct machine learning models. The critical challenge is to securely and effectively initializing the models for these agnostic clients, as well as the communication overhead with the server when participating in the training process. Recent research usually utilizes optimized global model for initialization, which can lead to privacy leakage of the training data.\nTo overcome these challenges, inspired by the recently proposed Learngene paradigm, which involves compressing a large-scale ancestral model into meta-information pieces that can initialize various descendant task models, we propose a \\textbf{D}ynamic agnostic \\textbf{F}ederated \\textbf{L}earning with \\textbf{L}earn\\textbf{G}ene framework. The local model achieves smooth updates based on the Fisher information matrix and accumulates general inheritable knowledge through collaborative training. We employ sensitivity analysis of task model gradients to locate meta-information (referred to as \\textit{learngene}) within the model, ensuring robustness across various tasks. Subsequently, these well-trained \\textit{learngenes} are inherited by various agnostic clients for model initialization and interaction with the server. Comprehensive experiments demonstrate the effectiveness of the proposed approach in achieving low-cost communication, robust privacy protection, and effective initialization of models for agnostic clients."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Federated Learning",
"Low-cost Communication",
"Learngene"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/5c3f3b5292f82e612c8385a0db63b455a726e2ea.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "DFL$^2$G: Dynamic Agnostic Federated Learning with Learngene"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3lDxKQepvn | Latent Task-Specific Graph Network Simulators | main | Active | Graph Network Simulators;Graph Neural Networks;Meta-Learning;Neural Processes;Deformable Object Simulation;MeshGraphNets | learning on graphs and other geometries & topologies | 3;5;6;6 | 4;5;4;3 | 2;3;3;3 | 2;3;3;3 | 2;3;3;3 | 5 | 4 | 2.75 | 2.75 | 2.75 | -0.288675 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. How does ProDMP generate smooth trajectories based on the predefined conditions of the initial state? Please give detailed justification and explanation.\n\n2. Could the author provide a detailed explanation of how a meta-learning problem can contribute to simulating new scenarios?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper shows a clear motivation for initial state uncertainty and data limitation, which are all critical problems in related research fields.\n\n2. Consider the \"node-level latent features,\" which is, to the best of my knowledge, a novel method for solving such a problem.\n\n3. The results of the new simulation task in the paper are convincing for the proposed method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a graph network simulator for mesh-based simulation on material study. The framework is constructed on a meta-learning problem and applies conditional Neural Processes to address data limitations. This paper shows both qualitative and quantitative experiments."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Some methodology details are unclear, especially in the \"Probabilistic Dynamic Movement Primitives\" section and \"Meta-Learning and Graph Network Simulators.\""
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. The model's architecture is not clearly explained, and it is unclear why certain modules are necessary. For example, from the results, it seems that MGN, even without history information, can surpass M3GN in performance. This raises questions about the value of incorporating historical information in M3GN. Moreover, the experimental results do not clearly demonstrate the necessity or advantages of using a meta-learning scheme. A thorough analysis on how meta-learning benefits model performance would be valuable, including ablation studies comparing model performance with and without meta-learning.\n2. The authors claim that the baseline MGN does not incorporate historical information, which appears inaccurate. In certain datasets, MGN does include history. For a fair comparison, the MGN baseline should also be evaluated with historical data to assess its impact on performance.\n3. The results section only reports the average MSE across all time steps. It would be helpful to provide a comparison of MSE over the number of prediction steps, as this would give insight into the model's performance stability over time as claimed in the paper.\n4. Based on Figure 3, the proposed M3GN method does not appear to use ground truth collider information. If this is the case, does the collider state being predicted by the mode? How accurate is the collider state prediction, especially when history steps are limited? Additionally, including collider ground truth (as in MGN) is actually intuitive and makes sense, as the primary goal of developing a simulation model is to understand how a solid deforms under varying contact forces and obstacle displacements. Predicting these external forces may not be necessary for achieving this objective.\n5. It would be informative to visualize the node-level latent task descriptions learned by the model. Such visualizations could help in understanding how task-specific information is represented.\n6. 
The datasets used in this paper have relatively small node counts compared to those in previous MGN studies or those used in other related papers. When the number of nodes increases significantly, it is concerned that M3GN may struggle due to the large number of historical steps required. Comparing M3GN’s memory usage with MGN’s would provide a more comprehensive evaluation. \n7. The authors consider each trajectory as a separate task with varying context sizes. However, this approach may not align with the broader goals of meta-learning, as tasks are typically defined by consistent properties such as the same material setting. Currently, the meta-learning setup seems more focused on adapting to different context sizes rather than generalizing across diverse tasks.\n8. As the input context size changes, will the number of predicted steps vary as well? If so, the model’s ability to generalize to different context sizes is unclear, and it may not be as flexible as MGN in this respect. Any experiments or evaluation on this aspect? Additionally, splitting single data points into multiple input-output sets seem to increase the effective amount of training data for M3GN, potentially creating an unfair comparison with MGN which use less training data.\n9. The authors do not specify how material properties are incorporated. Also, it is unclear whether the test data involve material properties that are in-distribution or out-of-distribution relative to the training data. Providing this information is crucial for evaluating the model's generalization capabilities.\n10. The authors mention that material node features are not added to M3GN. Given that these features enhance MGN's performance, it would be useful to understand the rationale for this exclusion and perform related ablation study.\n11. Although the authors mention other methods in related work besides MGN, these methods are not included in the baselines. Some of these methods have better accuracy and efficiency. 
Including these additional baselines would provide a clearer view of M3GN’s comparative performance.\n12. Will the data used in this study be publicly available? Making the dataset accessible would facilitate further research and replication studies."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper takes a novel approach to enhancing rollout stability by predicting entire future mesh states, and it incorporates a meta-learning scheme to improve adaptability within the simulation framework."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces Movement-primitive Meta-MeshGraphNet (M3GN), a model for simulating object deformations in data-limited scenarios. M3GN combines meta-learning and movement primitives to improve the adaptability and accuracy of Graph Network Simulators (GNSs) by framing mesh-based simulation as a meta-learning task."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While the approach appears novel, the rationale behind certain modules in the model is unclear, and the results do not provide sufficient evidence to justify their inclusion. Also, the paper is not clearly written and sometimes hard to follow. The detailed comments and suggestions are listed below."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. What is the timestep for simulation?\n\n2. A figure illustrating all the relations and symbols of the inputs and outputs could be added. Fig. 3 (right) is not informative for understanding the task setting."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "This work aims to address two important problems in learning-based simulation:\n\n1. It treats the simulation as a trajectory-level meta-learning problem and uses trajectory history as the context to predict future trajectories.\n\n2. It mitigates the problem of error accumulation by using ProDMP to directly predict the full simulation trajectories.\n\nThe paper is well structured and written."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors propose a graph network simulator that combines movement primitives and trajectory-level meta-learning. The network uses the simulation history as context information to predict the deformation of objects with unknown properties. They also use probabilistic dynamic movement primitives to represent the future trajectories and directly predict the full simulation trajectories instead of iteratively predicting the next step. Experiments show that it outperforms the state of the art in different simulation tasks. Ablation studies validate the effectiveness of the design choices."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Some descriptions are unclear and some important details are missing.\n(1) In line 242, \"graph edges between the deformable object and collider are added based on physical proximity to model interactions\n between objects.\" What is the physical proximity exactly? Since the deformable mesh node positions for the end timestep are unknown, I suppose we cannot use them to compute the distance. Is this edge creation done only for known timesteps, or is it updated during prediction?\n\n(2) In line 231, why does the term c_1y_1(t) + c_2y_2(t) depend only on the initial conditions? What is the representation of the pre-computed basis function \\phi?\n\n2. A more detailed description of the training/val/test split should be added. Specify how trajectories are divided between training, validation, and test sets. What is different between training and test? Clarify whether test trajectories involve different objects, material properties, or initial conditions than training trajectories. In the limitation part, it is claimed, \"We currently consider each trajectory as a task, and require initial states of this trajectory as a context set during inference.\"\n\n3. Since the method needs a trajectory with simulated states as context, the authors should include a runtime comparison between their method (including context computation) and traditional simulators for predicting the same number of future timesteps, and discuss the trade-offs between computation time and accuracy compared to traditional simulators."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See above."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Strengths:\n1. Adopting meta-learning to deal with dynamic prediction tasks is novel; in particular, the concept of regarding each trajectory as a new task is interesting. \n\n2. The authors consider past information and the eventual state of the collider as the condition to predict the subsequent movement trajectory, which makes the network infer the future from the past rather than remember the dynamic behaviour of a certain material. In addition, predicting the whole remaining path in a single forward pass could significantly improve efficiency, compared with previous graph-based single-timestep prediction."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a meta-learning framework to efficiently learn generalizable mesh-based dynamic prediction tasks. Unlike previous graph neural simulators, which predict state updates in a step-by-step manner, the proposed M3GN aims to predict whole trajectories with a conditional neural process to effectively diminish the error accumulation issue."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Weaknesses:\n1. This paper is highly related to graph-based neural simulators. However, in the related work section, the latest advancements in this field are not included, and most of the work discussed is from 2023 or earlier. This could make the paper appear somewhat outdated. I believe this section could benefit from a more comprehensive overview of the field, especially more works from 2024. Below are two of the latest advancements in graph network simulators that I recommend the authors discuss in Section 2.1 or, better, use as baselines for comparison. However, given the tight rebuttal timeline, it is also acceptable if concurrent works are not included for comparison.\n\n (1) \"DEL: Discrete Element Learner for Learning 3D Particle Dynamics with Neural Rendering\" (2024). This work integrates traditional Newtonian mechanics into the graph network design to benefit from mechanics priors for longer-term prediction.\n\n (2) \"Equivariant graph neural operator for modeling 3d dynamics\" (2024). This paper treats dynamic prediction tasks at the trajectory level rather than the next-step level via operator learning, which is somewhat relevant to the work under review. It also handles equivariance issues. \n\n2. For Equation 3, does it use past trajectory collider states when encoding z (you seem to use only the latest state), or does it rely solely on the historical information of the deformed object? I believe it would be more reasonable to use all the historical information of the collider here as well, since the deformation of the mesh is passive. \n\n3. If this method is trained on an elastic dataset, can it generalize directly to elastoplastic materials? I believe it would be worthwhile to discuss the generalization across different materials in the experiments, rather than limiting it to variations in mechanical parameters within the same material. \n\n4. 
Line 276 mentions that the context information z is concatenated with the node features. Is the same z concatenated to each node?\n\n5. Finally, the neural network predicts a set of weights, and the shape of the weight matrix is (T, D, 3). To which basis functions are these weights applied in order to obtain the predicted trajectory? Are they precomputed from the historical trajectory? If yes, how?\n\nIn Appendix A.2, \"Initially, we integrate a relative goal position as part of the node weights w.\" What is the exact meaning of the relative goal position? \n\nI will raise the score if most of the concerns are well addressed by the authors."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduce a latent task-specific Graph Network Simulator, which improves over existing learned simulators by framing mesh-based simulation as a meta-learning problem."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024latent,\ntitle={Latent Task-Specific Graph Network Simulators},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3lDxKQepvn},\nnote={under review}\n}"
},
"abstract": {
"value": "Simulating object deformations is a critical challenge in many scientific domains, with applications ranging from robotics to materials science. \nLearned Graph Network Simulators (GNSs) are an efficient alternative to traditional mesh-based physics simulators. Their speed and inherent differentiability make them particularly well-suited for inverse design problems such as process optimization.\nHowever, these applications typically offer limited available data, making GNSs difficult to use in real-world scenarios. We frame mesh-based simulation as a meta-learning problem and apply conditional Neural Processes to adapt to new simulation scenarios with little data. In addition, we address the problem of error accumulation common in previous step-based methods by combining this approach with movement primitives, allowing efficient predictions of full trajectories. We validate the effectiveness of our approach, called Movement-primitive Meta-MeshGraphNet (M3GN), through a variety of experiments, outperforming state-of-the-art step-based baseline GNSs and step-based meta-learning methods."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Graph Network Simulators",
"Graph Neural Networks",
"Meta-Learning",
"Neural Processes",
"Deformable Object Simulation",
"MeshGraphNets"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/9e4c58013535c2e948cb54b42f34a4300aa2307e.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning on graphs and other geometries & topologies"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/8f6093e48c557d36cb1bee23fb67c7c20a50bee8.zip"
},
"title": {
"value": "Latent Task-Specific Graph Network Simulators"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3lH8WT0fhu | ConMix: Contrastive Mixup at Representation Level for Long-tailed Deep Clustering | main | Active | deep clustering;long-tailed deep clustering;unsupervised learning | unsupervised, self-supervised, semi-supervised, and supervised representation learning | 5;5;6 | 3;3;3 | 2;3;3 | 2;2;3 | 2;2;2 | 5.333333 | 3 | 2.666667 | 2.333333 | 2 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. The result of pairwise ConMix shown in Table 3 on CIFAR-10 is better than the result of ConMix with M=500 presented in Figure 2. Is this reasonable? In my understanding, the former is equivalent to ConMix with a larger M on CIFAR-10.\n2. Have you conducted additional experiments on balanced models of other methods, as with ConMix-B, to support the claim about robustness?\n3. Are there other clustering methods being studied besides k-means?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.\tThe authors have conducted comprehensive experiments, compared with multiple clustering algorithms, proving the effectiveness of ConMix under long-tailed distributions.\n2.\tA reasonable theoretical analysis is given to verify that ConMix can implicitly achieve loss re-balancing.\n3.\tThe contributions of different elements in ConMix are studied through extensive experiments."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a new method called ConMix for dealing with the long-tailed problem in deep clustering. A major challenge in long-tailed deep clustering is how to deal with class imbalance in a dataset without label information. ConMix solves this problem through an innovative approach of mixing representations in contrastive learning to enhance deep clustering performance under long-tailed distributions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The representation synthesis part should be presented more intuitively, as it may be a little confusing on first reading."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Conduct additional experiments on large and complex datasets like ImageNet to validate the effectiveness and generalization capability of ConMix.\n\nEnhance the discussion on method interpretability by providing more empirical analysis regarding its impacts.\n\nProvide detailed descriptions of the experimental setup and hyperparameter selections to improve transparency and reproducibility of the research."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The introduction of ConMix as a contrastive mixup method specifically designed for long-tailed deep clustering is a notable contribution to the field. The approach is innovative, extending mixup techniques into the realm of unsupervised learning.\n\nThe authors provide a theoretical foundation for their method, demonstrating how it can implicitly balance losses across head and tail classes. This theoretical insight is valuable and adds depth to the paper.\n\nThe evaluations on various benchmark datasets and the assertion of outperforming existing methods lend credibility to the proposed approach. The performance metrics presented seem robust."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a novel method, ConMix, aimed at addressing the challenges of long-tailed distributions in deep clustering. The authors argue that existing deep clustering approaches typically assume balanced class distributions, which is not the case in many real-world datasets. ConMix leverages a contrastive mixup strategy to enhance representation learning, theoretically proving its effectiveness in rebalancing class losses without the need for label information. The method is evaluated on benchmark datasets, demonstrating superior performance over existing state-of-the-art approaches."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Diversity of Datasets: The experiments are limited to a few benchmark datasets, lacking validation of the method’s effectiveness on more complex and diverse datasets. It is recommended to conduct experiments on larger image classification datasets such as ImageNet to thoroughly evaluate the model’s generalization ability and practicality.\n\nInterpretability of the Method: Although theoretical proofs are provided, the interpretability of how ConMix specifically affects the model learning process remains insufficient. Consider adding comparative experiments to illustrate the specific impacts of ConMix under varying conditions (e.g., different long-tail ratios) to enhance the depth of the paper.\n\nDetails of Experimental Setup: The experimental section lacks detailed descriptions of hyperparameter choices and training specifics, which could affect the reproducibility of results. It is suggested to include these details in the methodology section to assist other researchers in understanding and replicating the experiments."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Could the authors elaborate on the statement in Line 220: “…. Equivalent to implicitly sampling different weights from the beta distribution”? Do the authors refer to the mixing coefficient in the original mixup formulation? While the reviewer understands that the cardinality of the set U_m follows a beta distribution, the final mixup representation will be the mean representation of these samples, which is different from the mixing coefficients. \n\nFurther, could the authors comment on the performance of the proposed approach in the balanced setting, and on the results in Table 3 with regard to the pairwise mixup and the variability of the results?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Limited work has been done on considering imbalance in deep clustering, with most approaches adopting a balanced assumption; approaches addressing this shortcoming are thus of significance to the community.\n\nWhile mixup on representations has previously been integrated into contrastive learning (also mixing multiple samples), there is a certain novelty in leveraging this in the clustering setting to address class imbalance, which is further supported by the theoretical analysis.\n\nThe proposed approach is simple and appears to be effective in the settings considered in this work."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes to leverage mixup to improve deep clustering approaches for imbalanced datasets. In particular, a multi-sample mixup is incorporated into the SimCLR loss. Instead of just contrasting two augmentations of the same sample, a random subset of samples are selected and the mean representations of their two augmentations are contrasted. A theoretical analysis is performed that shows, under a certain set of simplifications, that this procedure increases the loss of the underrepresented classes. Further, empirical evaluation demonstrate that the scheme can outperform alternative approaches in the imbalanced setting."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While the author’s main focus is on the imbalanced setting, it would be beneficial to also include comparisons in a balanced setting to be able to judge the overall ability of the method.\n\nThe overall clarity of Section 3.3. can be improved. How are the “stochastically assigned tags” selected? Does each sample have a certain probability of being included (independent of each other)? If that is the case, what is the probability set to? Also, in line 215, the notation of the cardinality of the set is not aligned with Eq. 3.\n\nThe comparison to pairwise ConMix (standard manifold mixup) in Table 3 is not clear. It appears that the pairwise mixup obtains equivalent results to ConMix w/o SDCLR warmup and it is unclear if pairwise ConMix leverages SDCLR warmup here. Also, is this pairwise ConMix directly leveraging the mean representation of the pair or do the authors create a convex combination with weights sampled from a beta distribution? \n\nAs deep clustering methods tend to be a bit less stable than supervised models, some measures of variability/statistical significance would be beneficial in Table 3."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We innovatively propose a ConMix method that can effectively address long-tailed deep clustering."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024conmix,\ntitle={ConMix: Contrastive Mixup at Representation Level for Long-tailed Deep Clustering},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3lH8WT0fhu},\nnote={under review}\n}"
},
"abstract": {
"value": "Deep clustering has made remarkable progress in recent years. However, most existing deep clustering methods assume that distributions of different clusters are balanced or roughly balanced, which is not consistent with the common long-tailed distributions in reality. In nature, datasets often follow long-tailed distributions, leading to biased models being trained with significant performance drops. Despite the widespread proposal of many long-tailed learning approaches with supervision information, research on long-tailed deep clustering remains almost uncharted. Unaware of the data distribution and sample labels, long-tailed deep clustering is highly challenging. To tackle this problem, we propose a novel contrastive mixup method for long-tailed deep clustering, named ConMix. The proposed method makes innovations to mix up representations in contrastive learning to enhance deep clustering in long-tailed scenarios. Neural networks trained with ConMix can learn more discriminative representations, thus achieving better long-tailed deep clustering performance. We theoretically prove that ConMix works through re-balancing the loss for classes with different long-tailed degrees. We evaluate our method on widely used benchmark datasets with different imbalance ratios, suggesting it outperforms many state-of-the-art deep clustering approaches. The code has been submitted in the supplementary file."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"deep clustering",
"long-tailed deep clustering",
"unsupervised learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/469504b6d02f88e296ac2ca87feb399da7390ddf.pdf"
},
"presentation": null,
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/bcc5091ad7aa2f2f6737e4206dba246c391f7bc2.zip"
},
"title": {
"value": "ConMix: Contrastive Mixup at Representation Level for Long-tailed Deep Clustering"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3lXZjsir0e | Sample Efficient Robust Offline Self-Play for Model-based Reinforcement Learning | main | Active | robust Markov games;self-play;distribution shift;model uncertainty;reinforcement learning | learning theory | 5;5;5;6;6 | 3;4;3;3;2 | 3;2;3;3;3 | 2;2;2;3;3 | 3;2;3;2;3 | 5.4 | 3 | 2.8 | 2.4 | 2.6 | -0.645497 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "None"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. How does the algorithm's performance vary with different types of divergence functions beyond total variation, such as Kullback-Leibler divergence?\n\n2. Would the RTZ-VI-LCB framework be adaptable to handle more complex multi-agent settings with more than two players?\n\n3. How sensitive is the model’s performance to variations in the clipping parameter $C_r^*$, and what guidelines can be provided for choosing this parameter effectively?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper provides a rigorous theoretical framework, including upper and lower sample complexity bounds, which supports the robustness and efficiency claims of the RTZ-VI-LCB algorithm.\n\n2. The design of RTZ-VI-LCB is explained in a step-by-step manner, making it easy to follow the rationale behind each component, such as the use of lower confidence bounds and two-player-wise rectangularity.\n\n3. The paper adapts the robust Bellman equations specifically for two-player games, enhancing the clarity and relevance of the methodology in the context of MARL."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses robust multi-agent reinforcement learning (MARL) in two-player zero-sum Markov games (TZMGs) by introducing the RTZ-VI-LCB algorithm, a sample-efficient approach to handle offline settings with environmental uncertainties. The algorithm improves robustness by applying value iteration with data-driven penalties and establishing sample complexity bounds without requiring full state-action space coverage."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper assumes that both players in the two-player zero-sum game have identical uncertainty sets (same divergence function for both players). This simplifies the model but may limit its applicability to real-world scenarios where players could have different levels of uncertainty.\n\n2. The penalty term introduced in the RTZ-VI-LCB algorithm is crucial for the robust value estimation, but the paper does not clearly explain how the penalty is calibrated or how different choices of penalty function influence the algorithm’s performance.\n\n3. The paper assumes that historical datasets can be treated as independent samples after applying subsampling techniques, but it does not fully address the potential temporal dependencies within offline data."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Why target two-player zero-sum games? Is there any special structure that helps the results, which hinders the authors from considering more general general-sum multi-agent games?\n\nOther minors:\n1) For presentation, as actually the max-player and min-player enjoys very similar formulation, algorithm update rules, and others, the presentation is a little bit redundant. It will be better to only write one time of them, such as equations 8(a), 8(b) can be represented as one if we let the min-player's everything be its negative version. The same for equation 9, 10, 18, the two terms in 22, and etc."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. This is the first work that targets offline settings for robust MARL problems, which is an interesting topic.\n2. It provides both upper and lower bounds for understanding this problem."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work focuses on developing provable algorithm for distributionally robust multi-agent reinforcement learning in the face of environmental shift, in offline setting using only a history dataset. Considering two-player zero-sum games, it proposes RTZ-VI-LCB with an upper bound and a lower bound for this problem."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The writing and presentation need to be revised a lot. A lot of parts of the paper are similar to prior art. For instance, the two-fold sampling method in Algorithm 1 is almost the same as Algorithm 3 in [1]. Although cited the prior works, the algorithm needs to be rewritten entirely.\n2. The contributions are a little bit overclaimed from the reviewer's viewpoint. In line 104, this work claims that \"To the best of our knowledge, this is the first time optimal dependency on actions {A, B} has been achieved\". While the concentrability coefficient also involves potential terms of A and B. So it is better to also say this is only for offline settings.\n3. Some writing issues such as in line 107. The \"transition kernel\" does not need to be solved, it seems to need to be revised to \"RTZMG\". In line 113, \"across a range of uncertainty levels\", it seems there is something missing in this half sentence.\n4. In the discussion part after showing the theorems, the reviewer highly suggests that the author check the claims again. For instance, in line 511-512, it seems the upper bound and lower bound do not match in $H$ even if $\\min\\\\{\\sigma^+, \\sigma^- \\\\} \\geq \\frac{1}{H}$. The upper bound has $O(H^5)$, while the lower bound has $O(H^4)$? So it is not optimal yet, which is also claimed in the second paragraph of the discussion.\n\n[1] Li, Gen, et al. \"Settling the sample complexity of model-based offline reinforcement learning.\" The Annals of Statistics 52.1 (2024): 233-260."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. In remark 1, the paper mentions that the coefficient $C_r^\\star$ could be $\\frac{AB}{A + B}$. Given that the sample complexity result is $\\tilde{O}\\left(\\frac{C_r^*(A + B)}{\\epsilon^2}\\right)$, does this imply that the complexity is reduced to $\\tilde{O}\\left(\\frac{A B}{\\epsilon^2}\\right)$ in terms of $A$ and $B$, which is the same as the result in DR-NVI? \n2. How should we compare the term $\\min\\left(f(\\sigma^+,\\sigma^-),H\\right)$ in the upper bound and the term $\\min\\left(1/\\min(\\sigma^+,\\sigma^-),H\\right)$ in the lower bound? Additional discussion on this comparison would clarify the practical implications of the upper bound's tightness relative to the lower bound.\n3. From the similarity between the lower bounds of RTZMGs and RZMGs (Shi et al., 2024b), I assume that RTZMGs are not significantly easier than RZMGs. Given this, is it feasible to extend the RTZ-VI-LCB algorithm to more than two players?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper is well-written and easy to follow. The problem of efficiently finding the robust NE in RTZMGs is significant for the field. The theoretical results are strong, as the sample complexity of the RTZ-VI-LCB algorithm nearly matches the lower bound for this problem class. Additionally, the lower bound analysis indicates that RTZMGs are not notably easier than traditional TZMGs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposed an algorithm RTZ-VI-LCB, designed to efficiently find the robust Nash Equilibrium(NE) in Robust Two-player Zero-sum Markov Games (RTZMGs). The authors employ confidence bounds innovatively in the algorithm, enabling it to achieve a sample complexity close to the lower bound except for the order of the horizon."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "There are still some technical problems to justify, which will be discussed in the Questions section."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Can you please clarify the new technical ideas in the proof as compared to the work of Blanchet et al and She et al?\n\nCan you please clarify the relationship of the algorithmic ideas to prior work, highlighting which components are natural extensions and which represent new algorithmic ideas?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The question of robust learning in strategic multi-agent settings has received considerable attention in recent years. This paper builds on recent work, providing a clear contribution by combining technical and algorithmic ideas from recent work to provide tighter and more general results than the state-of-the-art. \n\nAdditionally, the work provides interesting lower bounds, whose proofs provide important new insight for the area."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies robust two-player zero-sum Markov games. Recent papers have provided near optimal sample complexity bounds in this setting under partial and limited coverage of historical data individually, but cannot handle both settings simultaneously. This work provides an algorithm that can achieve near-optimal sample complexity under partial and limited coverage simultaneously while also providing information theoretic lower bounds."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The algorithmic novelty in the work is not clear. A core component of the algorithm is a natural extension of work by Li et al the setting in this paper. From my read, the key algorithmic novelty is primarily in the penalty term. \n\nThe technical novelty of the paper is not clearly presented. It seems that key components of the proof follow recent work (e.g. much of the work in step 1 of the proof follows closely the approach of Shi et al.). That said, there are certainly new ideas in the proof, it is just that the paper does not do a good job of highlighting the new techniques in the analysis. \n\nNumerical comparisons to the work of Blanchet et al and Shi et al are not included. Such comparisons would increase the potential impact of the paper and highlight the extent to which the improvement in theoretical bounds represent empirical improvements."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Based on the discussion of the paper's strengths and weaknesses, I have the following questions for the authors:\n\n1. The authors focus on the finite-horizon setting. Can the methodology presented in the paper be extended to analyze the infinite-horizon setting, as in [1]? Additionally, why did the authors choose to focus on the finite-horizon case rather than the infinite-horizon scenario?\n\n2. The algorithmic framework follows from [1]. What are the specific technical challenges in extending the techniques of [1] from standard zero-sum Markov games to robust zero-sum Markov games?\n\n\n [1] Yan Y, Li G, Chen Y, et al. Model-based reinforcement learning is minimax-optimal for offline zero-sum markov games[J]. arXiv preprint arXiv:2206.04044, 2022.\n\n [2]Shi, Laixi, and Yuejie Chi. \"Distributionally robust model-based offline reinforcement learning with near-optimal sample complexity.\" Journal of Machine Learning Research 25.200 (2024): 1-91.\n\n [3]Li G, Shi L, Chen Y, et al. Settling the sample complexity of model-based offline reinforcement learning[J]. The Annals of Statistics, 2024, 52(1): 233-260."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The primary strengths of this paper can be summarized in the following two aspects:\n\n1. This paper introduces the first algorithm that achieves optimal sample complexity with respect to the dependence on action spaces. \n2. The paper offers a comprehensive analysis of robust tabular zero-sum Markov games, presenting both upper and lower bounds on the sample complexity."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a robust model-based algorithm for offline two-player zero-sum Markov games (RTZMGs), effectively addressing the challenges of learning under partial coverage and environmental uncertainty. The key contributions of the paper are as follows:\n\n- The authors introduce the robust tabular zero-sum Markov game framework by extending the standard tabular zero-sum Markov game to a robust setting. Under this framework, they propose a new algorithm, RTZ-VI-LCB, which integrates robust value iteration with a data-informed penalty term to estimate robust Nash equilibria.\n- The authors provide a finite-sample complexity analysis for RTZ-VI-LCB, demonstrating its optimal dependency on the number of actions. This represents the first set of optimal sample complexity bounds for RTZMGs.\n- The authors establish a lower bound on the sample complexity for learning RTZMGs, confirming the tightness of their upper bound and demonstrating the near-optimality of RTZ-VI-LCB across varying levels of uncertainty."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "There are two major weaknesses from my perspective:\n\n1. The authors do not discuss whether a Nash equilibrium exists under their definition of the robust zero-sum Markov game. It is well known that in robust Markov games, the existence of a Nash equilibrium can be affected by the choice of uncertainty sets and specific problem settings. Therefore, I believe it is essential to provide a discussion on the existence of Nash equilibrium within their framework.\n2. Another weakness is the limited technical novelty of the work. The presentation in Sections 3.1 and 3.2 closely resembles that of [1], and the overall methodology appears to be a direct combination of [1] and [2]. The primary contribution seems to be the incorporation of the two-fold subsampling trick from [3] to sharpen the sample complexity bounds."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We design an algorithm that first achieves the optimal upper bound under partial coverage and environment uncertainty in robust two-player zero-sum Markov games."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024sample,\ntitle={Sample Efficient Robust Offline Self-Play for Model-based Reinforcement Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3lXZjsir0e},\nnote={under review}\n}"
},
"abstract": {
"value": "Multi-agent reinforcement learning (MARL), as a thriving field, explores how multiple agents independently make decisions in a shared dynamic environment. Due to environmental uncertainties and fluctuations, policies in MARL must remain robust to tackle the sim-to-real gap. Although robust RL has been extensively explored in single-agent settings, it has seldom received attention in self-play, where strategic interactions heighten uncertainties. We focus on robust two-player zero-sum Markov games (TZMGs) in offline RL, specifically on tabular robust TZMGs (RTZMGs) with a given uncertainty set. To address sample scarcity, we introduce a model-based algorithm (*RTZ-VI-LCB*) for RTZMGs, which integrates robust value iteration considering uncertainty level, applying a data-driven penalty term to the robust value estimates. We establish the finite-sample complexity of RTZ-VI-LCB by accounting for distribution shifts in the historical dataset, without requiring for full state-action space coverage. To the best of our knowledge, we provide the upper bound in RTZMGs, which first achieves optimal sample complexity on the dependency of action spaces. Our algorithm is capable of learning under partial coverage and environmental uncertainty. An information-theoretic lower bound is developed to show that learning RTZMGs is at least as difficult as standard TZMGs when the uncertainty level is sufficiently small. This result confirms the tightness of our upper bound, which is near-optimal for the big uncertainty level, except for the horizon length."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"robust Markov games",
"self-play",
"distribution shift",
"model uncertainty",
"reinforcement learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/8c126e65d2bc6caf6732b97185caf0b5b8f18dbf.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning theory"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Sample Efficient Robust Offline Self-Play for Model-based Reinforcement Learning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3lZd6eoPJz | PBCAT: Patch-Based Composite Adversarial Training against Physically Realizable Attacks on Object Detection | main | Active | adversarial robustness;object detection | alignment, fairness, safety, privacy, and societal considerations | 3;5;6;6 | 3;4;4;4 | 4;3;3;3 | 3;2;3;3 | 4;3;2;3 | 5 | 3.75 | 3.25 | 2.75 | 3 | 0.942809 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1. What are the differences between square adversarial patches and physically realizable attacks? \n2. Why is it necessary to design defense algorithms specifically for these attacks, and what are the limitations of existing defense methods ?\n3. What is the purpose of designing a binary mask? Could you please explain?\n4. The location of the mask is randomly selected, and then gradient information is used to determine the final patch. What is the difference between this approach and selecting the mask first followed by a random selection of the patch? Is there any advantage to this method ?\n5. Why is the adversarial training method presented in this paper inferior to L_\\infty-bounded adversarial training when applied to clean data?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "1.\tThe topic studied in the paper is practical.\n2.\tThe proposed method demonstrates a degree of generalization, as it does not rely on specific attack algorithms.\n3.\tThe proposed method is effective against common adversarial attack algorithms.\n4.\tThe experiments conducted are relatively comprehensive."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Early efforts have primarily focused on defending against adversarial patches, leaving adversarial training (AT) against a broader range of physically realizable attacks underexplored. In this work, the authors address this gap by proposing a unified AT method to defend against various physically realizable attacks. They introduce PBCAT, a Patch-Based Composite Adversarial Training strategy, which optimizes the model by combining small-area gradient-guided adversarial patches with imperceptible global adversarial perturbations that cover the entire image. This design enables PBCAT to defend not only against adversarial patches but also against unseen physically realizable attacks, such as adversarial textures. Extensive experiments across multiple settings demonstrate that PBCAT significantly enhances robustness against various physically realizable attacks compared to state-of-the-art defense methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper lacks novelty.\n2. The authors should emphasize why standard adversarial training cannot effectively address physically realizable attacks and highlight the advantages of the proposed method presented in this paper. \n3. In lines 251-253, the authors' findings seem meaningless, as unlimited adversarial noise will inevitably lead to a decline in training performance.\n4. Although the training cost of PBCAT is comparable to that of standard training, it still demands additional computational resources due to the gradient post-processing steps (partial partitioning and selection)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See the Weakness."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The method is simple and effective.\n- The experimental results and ablation studies are convincing."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose a adversarial training method to defend against physically realizable attacks. Specifically, they propose a new adversarial patch attack and use them to train the model."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- It is curious that the proposed methods work for naturalistic patch attacks. Experiments on defending naturalistic patch attack will strengthen the paper.\n- No black-box experiments are conducted. For example, FastRCNN trained with the proposed method against different datasets and attacks using other surrogate models such as Yolo.\n- Hyper-parameter tuning and training time is a concern"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- The authors mention physically realizable attacks that extend beyond adversarial patches. Why should these represent distinct attacks if they are computed to fool the same model? Adversarial patches could potentially encompass also features of an adversarial t-shirt, as they are capable of generalizing and representing any potential adversarial texture. For instance, at the end of Section 2.3, the authors suggest that real-world adversarial patches may not generalize well to other types of physical attacks, why?\n\n- The adversarial training is applied only for inf-norm bounded attacks. It would be interesting to explore SOTA patch and texture attacks bounded on different norms. What about the robustness against L-2 Attacks? How much it is the model cpaable of etending the robustness against L-2 Attacks?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- As remarked by different experiments, the proposed method increases the robusteness over different attacks. \n\n- Overall I think that the results are quite intersting, it provides a quite large gap above other strategies."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a novel adversarial training method designed to defend against various physically realizable attacks on object detection tasks. The perturbation for generating adversarial examples during training includes a global perturbation, constrained by an \nℓ-inf norm with a small budget applied across the entire image, and a local patch, randomly positioned within the bounding box. This local patch is composed of sub-patches, with only some selected to inject a larger budget constraint."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The approach may impact accuracy sometime, especially when dealing with large datasets like COCO, as shown in Table 5. However, the effectiveness in terms of improved robustness is noteworthy.\n\n- The authors could have added metrics on training costs in the table to better clarify possible efficiency with respect to other training strategies"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. It is unclear whether the patch selection calculation (i.e., the $l_2$ norm calculations) is performed on the clean image or the adversarial image (the one containing the adversarial patch).\n* Could you please clarify this?\n* Additionally, what is the rationale behind choosing a square-shaped mask?\n* Have you considered experimenting with different norms beyond the $l_2$ norm?\n\n2. In the model training, are the weights initialized to random values or pre-trained weights? If random initialization is used, the object detector may risk overfitting on the Inria dataset, which contains only a few hundred images. This could explain the inconsistencies observed between the results on MS-COCO and Inria.\n\n3. In line 345, the total number of sub-patches is set to $n^2=64$ , and in lines 238-239, you mention that the top half are selected, indicating that 32 patches are chosen. However, in the ablation study regarding the number of sub-patches used during the selection process (Table 3), only a single value (16) is presented as a portion of the sub-patches, since using 64 means utilizing the entire set. This leads me to infer that 16 is deemed the optimal value. Does using 32 sub-patches result in better performance? It would be beneficial to explore additional values in this experiment.\n\n4. Could you provide some insights into the results presented in Table 2, particularly concerning the \"Global\" component? I find it challenging to understand why the \"Global\" component enhances robustness against AdvTexture and AdvCat attacks, given the significant differences in perturbation styles between them. Additionally, why does robustness decrease against AdvTexture when the Patch and Partition components are added (Lines 3 and 4)?\n\n5. Following the above question, in line 465 it is stated that “Partition” denotes the patch partition strategy. What is the strategy other than “Gradient”? what does Line 4 in Table 2 mean?\n\n5. 
While I acknowledge that the paper focuses on patches attached to objects, it would be beneficial to evaluate the proposed approach against attacks that place patches in different locations (e.g., DPatch [5]) and to study the effect of the \"Global\" component on these attacks. Demonstrating the ability to mitigate the impact of such patches could significantly enhance the paper's contributions."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper addresses a practical yet underexplored topic: adversarial training for defending object detection models against realizable attacks.\n2. The evaluation setup is well-detailed, and the provided code ensures easy reproducibility.\n3. The proposed method achieves excellent performance in terms of adversarial robustness."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors introduce a patch-based adversarial training technique designed to improve the robustness of object detection models against both patch-based and more recent texture-based attacks. The method involves two types of perturbations: local perturbations applied to the attacked object and a global perturbation affecting the entire image. The global perturbation is aimed at enhancing the robustness against texture-based attacks. In their evaluation, the authors compare their technique to one adversarial training (AT) approach and several non-AT methods across three patch-based attacks. They also present ablation studies to assess the impact of various hyperparameters. Finally, the evaluation is extended to other object detection models to demonstrate the method's broader applicability."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Incomplete literature review – while the authors state that there are no previous works that specifically propose patch-based AT for object detection, a more in-depth review of the literature would have revealed that techniques such as Ad-YOLO [1] and PatchZero [2] already exist (and should be compared to). Additionally, including comparisons to more recent non-AT methods (e.g., PatchBreaker [3], NAPGuard [4]) would strengthen the paper's overall contribution.\n\n2. Lack of novelty – The proposed method appears relatively simple, primarily combining existing techniques adapted for object detection without introducing substantial new contributions, aside from the patch partitioning and selection strategy.\n\n3. Experiments - While the authors conduct a relatively comprehensive evaluation, several aspects are lacking:\n\n* Models: Since the focus is on person detection, which typically involves real-time scenarios, the evaluation should prioritize low-latency models (e.g., one-stage detectors) rather than slower ones like Faster R-CNN. Including YOLO models, particularly the most recent versions, would have been more relevant, as they are widely used in real-time object detection.\n* \"Clean\" results: While the authors acknowledge the performance drop on clean images as a limitation, the degradation in accuracy is significant, especially when compared to (Li et al. 2023) in Tables A1, 5, and 6. This raises concerns about whether the improved robustness stems from a robustness-accuracy trade-off. A more fair comparison would require matching the AP on clean images across methods before assessing robustness. \n* Results discussion: The results are presented with limited interpretation. The discussion would benefit from addressing edge cases and explaining unintuitive findings (as highlighted in question 4 below).\n\n4. 
Presentation - the submission is held back by the writing quality, particularly in the method section, mainly focused around the partially existing formulaic descriptions. For instance, the number of selected sub-patches should be parametrized (with an accompanying equation or algorithm) to better align with the presentation of the ablation study in Section 4.3.2.\n\nMinor comments:\n- Algorithm 1 – the use of $m$ and $m_p$ is confusing.\n- The placement of the tables on Page 9 makes them hard to read.\n- Best “Clean” performance should also be marked with bold.\n\n[1] Ji, N., Feng, Y., Xie, H., Xiang, X., & Liu, N. (2021). Adversarial yolo: Defense human detection patch attacks via detecting adversarial patches. arXiv preprint arXiv:2103.08860.\n\n[2] Xu, K., Xiao, Y., Zheng, Z., Cai, K., & Nevatia, R. (2023). Patchzero: Defending against adversarial patch attacks by detecting and zeroing the patch. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 4632-4641).\n\n[3] Huang, S., Ye, F., Huang, Z., Li, W., Huang, T., & Huang, L. (2024). PatchBreaker: defending against adversarial attacks by cutting-inpainting patches and joint adversarial training. Applied Intelligence, 54(21), 10819-10832.\n\n[4] Wu, S., Wang, J., Zhao, J., Wang, Y., & Liu, X. (2024). NAPGuard: Towards Detecting Naturalistic Adversarial Patches. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 24367-24376).\n\n[5] Liu, X., Yang, H., Liu, Z., Song, L., Li, H., & Chen, Y. (2018). Dpatch: An adversarial patch attack on object detectors. arXiv preprint arXiv:1806.02299.\n\n"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "The first work showing strong robustness against various physically realizable attacks"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024pbcat,\ntitle={{PBCAT}: Patch-Based Composite Adversarial Training against Physically Realizable Attacks on Object Detection},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3lZd6eoPJz},\nnote={under review}\n}"
},
"abstract": {
"value": "Object detection plays a crucial role in many security-sensitive applications, such as autonomous driving and video surveillance. However, several recent studies have shown that object detectors can be easily fooled by physically realizable attacks, \\eg, adversarial patches and recent adversarial textures, which pose realistic and urgent threats. Adversarial Training (AT) has been recognized as the most effective defense against adversarial attacks. \nWhile AT has been extensively studied in the $l_\\infty$-bounded attack settings on classification models, \nAT against physically realizable attacks on object detectors has received limited exploration. \nEarly attempts are only performed to defend against adversarial patches, leaving AT against a wider range of physically realizable attacks under-explored.\nIn this work, we consider defending against various physically realizable attacks with a unified AT method. \nWe propose PBCAT, a novel Patch-Based Composite Adversarial Training strategy. PBCAT optimizes the model by incorporating the combination of small-area gradient-guided adversarial patches and imperceptible global adversarial perturbations covering the entire image. With these designs, PBCAT has the potential to defend against not only adversarial patches but also unseen physically realizable attacks such as adversarial textures.\nExtensive experiments in multiple settings demonstrated that PBCAT significantly improved robustness against various physically realizable attacks over state-of-the-art defense methods. Notably, it improved the detection accuracy by 29.7\\% over previous defense methods under one recent adversarial texture attack."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"adversarial robustness",
"object detection"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/092a436a6d1ec5d3fed0de6615f072af2eb4aa31.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "PBCAT: Patch-Based Composite Adversarial Training against Physically Realizable Attacks on Object Detection"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3lfSk8NWWp | Unsupervised 2D Molecule Drug-likeness Prediction based on Knowledge Distillation | main | Active | Drug-likeness Prediction;Molecule Representation;Molecular Property Prediction | applications to physical sciences (physics, chemistry, biology, etc.) | 3;3;5;5 | 1;4;4;4 | 2;2;2;2 | 2;2;2;2 | 3;3;3;3 | 4 | 3.25 | 2 | 2 | 3 | 0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Interpretability of Scoring: Could the authors clarify how the gap between teacher and student outputs specifically reflects drug-likeness, possibly by linking it to characteristics like toxicity markers or functional groups?\n2. Hyperparameter Sensitivity: How sensitive is the model to masking ratios in atom/bond modeling tasks?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper offers a scalable approach for drug-likeness screening, with practical applications in drug discovery and unsupervised molecular learning.\n2. By using 2D molecular graphs instead of SMILES, the approach effectively reduces biases commonly associated with SMILES-based drug-likeness scoring.\n3. The method consistently demonstrates superior performance compared to baseline models, highlighting its robustness and effectiveness.\n4. The knowledge distillation approach proposed in the paper might be an effective way to address challenges with unbalanced datasets in drug discovery, where true positives are often limited."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents an unsupervised method for predicting drug-likeness in molecules that exploits 2D features of molecules. It uses a knowledge distillation approach with two models: a \"teacher\" model trained on a large dataset of molecules, which learns molecular topology through tasks like masked atom and bond prediction, and a \"student\" model trained only on real drugs. The student model mimics the teacher’s output on drug-like molecules but diverges on non-drug molecules, allowing for a drug-likeness score based on the difference between the models’ outputs. Experimental results show that this method outperforms existing models and is less affected by biases, offering a potentially more accurate way to determine drug likeliness."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The scoring method relies solely on the difference between teacher and student models. Including additional criteria, such as molecule toxicity features, could improve robustness.\n2. While the model leverages 2D molecular graphs, drug effectiveness often depends on 3D molecular interactions with proteins, which this paper does not address as a limitation.\n3. To assess the model's true potential in drug discovery, testing on novel, unseen datasets and conducting out-of-distribution benchmarks would be valuable.\n4. In practical applications like drug discovery, an interpretability analysis would be beneficial to understand the model’s behavior."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- The authors should better clarify what the novelty of this work is, also accounting for the comments above.\n- The authors should introduce more baselines, in particular focusing on:\n - State-of-the-art molecular generative methods (e.g., based on graph-based representations) used to estimate likelihoods, instead of SMILES-based.\n - Other self-supervised methods used to learn general chemical representations, and to define the chemical space.\n - Outlier detection methods used to define novelty. \n\nIn this context, the authors should better clarify the original contributions proposed by this work."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The topic of the paper is relevant, as an improved quantification of drug-likeness can accelerate the drug discovery process and enable other approaches.\n- The method is clearly explained.\n- Analysis and ablation studies help develop an understanding of the proposed approach."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper focuses on drug-likeness prediction based on chemical structures. Instead of framing the problem as a supervised task or likelihood-based estimation, this work proposes an approach based on self-supervised learning followed by knowledge distillation. The proposed method ios compared against several baselines, including supervised classification, likelihood-based (RNN), and QED. The approach is tested on multiple datasets. Multiple analysis and ablation studies are conducted, providing further insights into the results."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The main limitations of this work are related to its novelty, lack of baselines, and limited clarity on its overall positioning. \n- First of all, in the introduction and motivation, the paper distinguishes itself from other unsupervised-based approaches based on the fact that previous work leverages SMILES representations, while this work leverages 2D graphs. However, it is actually possible (and typically done) to compute likelihoods based on 2D graph representations. Indeed, this is typically one of the main ways graph (and molecule) generative methods are evaluated (see, e.g., Diamant et al., 2023 ICML). It is in general well known that for molecular tasks, graph-based representations outperform SMILES-based representations, both for supervised and generative tasks (see, for example, leaderboard https://ogb.stanford.edu/docs/leader_graphprop/). Therefore, using graph-based representations (which have been state-of-the-art for years) instead of SMILES-based representations does not seem to be particularly novel.\n- This paper introduced a self-supervised framework that appears to be very similar to previous work (see \"Evaluating Self-Supervised Learning for Molecular Graph Embeddings\", NeurIPS 2023 for some examples). In this context, the choice of the self-supervised model introduced in this paper appears not novel and arbitrary. \n- This method is framed as novel compared to existing methods based on outlier-based estimation. However, the proposed approach is actually an outlier estimation technique, given that the drug-likeness score is obtained as difference between a model trained on the whole chemical space, and a model trained only on drug-like (i.e., \"known\") molecules. Therefore, more advanced outlier estimation methods should be evaluated.\n\nOverall, it is not clear what the contribution and novelty of this paper is. Additionally, several critical baselines are missing."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weaknesses"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper is well written and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors proposed a 2D-based unsupervised drug-likeness prediciont method. They performed knowledge distribution by pretraining a teacher model on both positive and negative molecules and futher trained a student model on positive drug-like molecules only, and further minimized the embedding between teacher model and student model."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "There are some major concerns about this paper:\n1. The performance of the RNN paper looks great enough according to Table 1, even in the BondError dataset proposed by the authors, the RNN performance is pretty great. I see the main disadvantage of the RNN method is about its bias on the SMILES length. Thus, the authors should proposed some new datasets which contain molecules with different scales of SMILES lengths. Even though the authors showed the comparsion between RNN and their method on different scales of SMILES lengths in Figure 5, which partially address this concern, it's still not that complete. And the authors didn't display the number of molecules for different lengths.\n2. The baseline methods are too weak. The RNN method was way too old. Even the SMILES-BERT was trained 5 years ago. I wonder if the authors would use any transformer for comparison.\n\nThere are some other minor concerns about this paper:\n1. The score is based on atom embedding level, why not consider bond embedding as well?\n2. Be more careful about the potential data leakage problem, even though it might be hard to avoid when there is pretraining stage. Consider scaffold split.\n3. There are two $L_{mam}$ in Formula (1)\n4. Only positive examples are used in the training of the student model, what if introduce some negative examples and perform constrastive learning?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 1
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Could you please consider a comparison with the baselines Molformer, Graphformer and ImageMol when they are finetuned on the druglikeliness prediction tasks?\n\nCould you please compare to the GraphMVP methods?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper was well written, the related works and the motivation behind their methods is well explained. \nThe idea of using the difference between the teacher models and student models likeliness prediction is interesting.\nInteresting analysis on the bias of RNN toward short sequences."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper addresses drug-likeness prediction challenges and introduces a novel knowledge distillation approach. In this method, a teacher model is pretrained using 2D molecular graphs with atom/bond masking predictive modeling, trained on a large dataset comprising both drugs and non-drugs. The student model, by contrast, is trained solely on drugs, separate from the teacher's dataset. The final drug-likeness prediction is based on the difference in likelihood predictions between the teacher and student models. \n\nThe authors evaluate their method using standard benchmarks, comparing it to five baselines. The baselines include two classes: supervised approaches (QED, a graph neural network (GCN), and a recurrent neural network (RNN)) and unsupervised methods (GlocalKD and HimNet). In four subsets of the FDA-approved drugs dataset, the proposed approach significantly outperforms these baselines. An ablation study is also conducted to examine the contributions of pretraining and distillation, alongside analyses highlighting the RNN’s bias toward shorter drug molecules. While the source code is not yet available, the authors have committed to open-sourcing it in the future."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Although this work targets a specific molecular property prediction task, it does not thoroughly discuss or compare against a substantial body of research in molecular representation learning. For instance, methods based on molecular fingerprints and GNNs, such as *ADMET Property Prediction through Combinations of Molecular Fingerprints* ([arXiv:2310.00174](https://arxiv.org/abs/2310.00174)), have shown strong results in ADMET prediction and could be readily adapted to tasks like drug-likeness prediction. Additionally, recent advancements in pretrained models—such as *Molformer* ([Nature](https://www.nature.com/articles/s42256-022-00580-7)), *Graphormer* ([GitHub](https://github.com/microsoft/Graphormer)), and *ImageMol* ([GitHub](https://github.com/HongxinXiang/ImageMol))—would be valuable baseline comparisons for the present study.\n\nThe novelty of the proposed pretraining tasks also appears limited, as atom and bond masking in graph pretraining has become a widely adopted approach. For example, *GraphMVP* ([OpenReview](https://openreview.net/pdf?id=xQUe1pOKPam)) employs similar masking strategies to pretrain GNNs, covering both 2D and 3D graphs, with masking applied to parts of 2D graphs as well.\n\nFurthermore, the source code has not been made publicly available, hindering reproducibility of the experimental results."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024unsupervised,\ntitle={Unsupervised 2D Molecule Drug-likeness Prediction based on Knowledge Distillation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3lfSk8NWWp},\nnote={under review}\n}"
},
"abstract": {
"value": "With the research significance and application value, drug-likeness prediction aims to accurately screen high-quality drug candidates, and has attracted increasing attention recently. In this regard, dominant studies can be roughly classified into two categories: (1) Supervised drug-likeness prediction based on binary classifiers. To train classifiers, the common practice is to treat real drugs as positive examples and other molecules as negative ones. However, the manual selection of negative samples introduces classification bias into these classifiers. (2) Unsupervised drug-likeness prediction based on SMILES representations, such as an RNN-based language model trained on real drugs. Nevertheless, using SMILES to represent molecules is suboptimal for drug-likeness prediction, which is more relevant to the topological structures of molecules. Besides, the RNN model tends to assign short-SMILES molecules with high scores, \nregardless of their structures. In this paper, we propose a novel knowledge distillation based unsupervised method, which exploits 2D features of molecules for drug-likeness prediction. The teacher model learns the topology of molecules via two pre-training tasks on a large-scale dataset, and the student model mimic the teacher model on real drugs. In this way, the outputs of these two models will be similar on the drug-like molecules while significantly different on the non-drug-like molecules. To demonstrate the effectiveness of our method, we conduct several groups of experiments on various datasets. Experimental results and in-depth analysis show that our method significantly surpasses all baselines, achieving state-of-the-art performance. Particularly, the prediction bias of SIMILES length is reduced in our method. We will release our code upon the acceptance of our paper."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Drug-likeness Prediction",
"Molecule Representation",
"Molecular Property Prediction"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/f551d5df651e2426c956085f20f8b60d5e3f29a1.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to physical sciences (physics, chemistry, biology, etc.)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Unsupervised 2D Molecule Drug-likeness Prediction based on Knowledge Distillation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3llRc6oXEW | Link Prediction with Untrained Message Passing Layers | main | Active | graph neural networks;untrained message passing layers;link prediction;path-based similarity measures | learning on graphs and other geometries & topologies | 3;3;3;5 | 3;5;3;4 | 3;3;2;2 | 2;2;2;2 | 2;3;1;3 | 3.5 | 3.75 | 2.5 | 2 | 2.25 | 0.174078 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "NA"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Using untrained message passing layers for GNN can be a computationally efficient approach, which is an important topic in the community."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work explores the use of untrained message passing layers in GNN for link prediction tasks. The authors showed that, experimentally, untrained message passing layers provides competitive performance when compared against fully trained layers for link prediction. The authors also provided a simple theoretical analysis to justify their claims."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Weakness:\n\nOverall, the presented work gives the impression of an early draft, and I find it challenging to fully assess its contributions in its current form. Below are some clear issues:\n\n1. The presented “theoretical results” are poorly organized, and it is very challenging to judge its correctness given there is no clear distinction between the authors’ contributions and existing results. To be honest, I am not very sure what is the theoretical contributions provided by the authors. The results rely on oversimplified assumptions (e.g., orthonormality line 304); and the authors were linking random things together (e.g., PageRank line 348). It is very challenging for me to decipher what the authors what to convey here.\n\n2. It does not seem like the experimental results support the authors’ claim. In many cases, the untrained variant of the network performs very poorly, especially in Hits@100 dataset. If the authors want to claim the simplified network performs very well, the paper should be written as such.\n\n3. Lack of ablation experiments, intuitive results on synthetic datasets, or example results/visualizations on representative datasets.\n\n4. The presented figure 1 is of low quality. Maybe at least show the standard deviation across different runs?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Can you comment on why orthogonality would be expected in homophilic networks?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper is written well and the presentation is good.\n\nThe analysis of the inter layer values to path-based measures is nice, though not unexpected.\n\nExperimental results have been carried out on the usual link prediction benchmarks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes untrained and linear message passing layers for graph neural networks for the task of link prediction. Theoretical analysis relates the values computed at the intermediate layers to path-based and random walk based connectivity measures. An assumption is made regarding the orthogonality of the initial node features. Experimentally, the method is shown to be comparable to trained and non-linear layers in a GNN."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The idea of untrained and linear layers (as acknowledged by the authors) has previously appeared for node classification in Wu et al, ICML 2019. So, the idea has limited novelty.\n\nThe assumption of orthogonality may not always hold especially under conditions of homophily where neighboring nodes have similar features.\n\nLink prediction has been studied in many works and the impact of another paper is limited."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* While running the code I noticed that the inference for the untrained layers uses considerably more memory than even training the full GNN. So much so that I ran out of memory on my laptop. I’m wondering, could your code actually be used for much larger graphs?\n* The meaning of this sentence was a bit unclear to me: “ Since the simplified architectures consist of UTMP layers followed by a trainable linear layer, the consideration of UT models which do not include the linear layer also covers all possible ablation studies” Could you clarify?\n* Some other suggestions for improvements were already described together with the weaknesses. \n\nIn light of the weaknesses I mentioned, I tend towards a reject. Especially the empirical analysis has to be improved considerably."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* To the best of my knowledge, this is the first application of untrained message-passing layers to link prediction.\n* The empirical results show that untrained layers perform reasonably well.\n* Some theoretical observations are included to complement the empirical analysis."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper investigates the use of untrained message-passing layers for link prediction tasks. Both a completely untrained model and a model with a trainable layer after the message passing layers are compared to standard trained message-passing layers. The authors provide theoretical observations to support their empirical analysis."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* Lack of novelty: Most of the work builds directly on top of Wu et al. and mirrors many parts of it. The architecture and setup are almost exactly the same, and even some claims, like the benefit of efficiency and interpretability, are taken straight from there. This is not to say that they are not true, but for example in the case of interpretability, there is not much evidence provided beyond the fact that the architecture is simpler. \n\n* The theoretical contribution in the paper feels somewhat limited, with unclear takeaways. Although the authors aim to support their work with theory, the analysis falls short of substantiating the core claims. For instance, the authors state that “Our theoretical analysis further provides insights into the effectiveness of widely used node initialization schemes such as one-hot encodings and random features.” However, as this theoretical analysis is restricted to feature vectors that are pairwise orthonormal - often true for one-hot encodings and random features - it doesn’t convincingly explain why these should be more effective than other potential initializations that lack this precondition. Additionally, this analysis seems somewhat peripheral to the main focus of the paper: comparing untrained and trained MPNNs. To strengthen this aspect, I suggest clearly outlining the theoretical contributions by structuring them into theorems with proofs and more directly linking them to the paper’s central claim.\n\n* The main point of the paper is the fact that untrained message-passing layers perform very well in comparison to their trained counterparts. To evaluate this properly, a good benchmarking framework is necessary that guarantees that the difference in predictive performance is coming from what the authors claim it’s coming from, and especially that the comparison is fair. This is where I have my main problem with this paper. 
No established benchmarking framework is used, and almost no attention is paid to the fact that link prediction tasks are notoriously hard to evaluate. I would recommend the authors to consult recent works like [1] and [2], which go into more detail on various problems, but let me name some important ones here: Looking at the provided code, it looks like negative sampling of edges is done randomly, which is likely to cause bad performance on these tasks. This is problematic because it’s not clear if untrained layers really perform that well in comparison or if the training procedure was just not good. From Table 2, I can tell that in several cases, the final trained linear layer (which was also trained with random negative sampling) performs worse than the untrained one. This is really surprising to me and could be due to the negative samples that were not chosen well. While looking at the code, I also noticed that the test dataset for the untrained variants is not the same as for the trained ones because the random link split in the dataset transform is initialized with `num_val=0.00, num_test=0.1` in contrast to `num_val=0.05, num_test=0.1` for the trained counterpart. While I don’t expect this one to make a huge difference, it just goes to show that the benchmarking is not done thoroughly enough to warrant the claims made in the paper. Getting link prediction right is actually quite hard, considerably more so than for node and graph-level prediction tasks, and a paper that builds on top of these results that much should put more scrutiny into it. My proposal is this: Use an existing benchmarking framework. This also makes it possible to compare the results to other papers and to run with more recent datasets from ogb, which are completely missing from this analysis. On a side note, I think that these would be quite important as they are bigger and could demonstrate the claimed scalability advantage of untrained layers.\n\n[1] Li, Juanhui, et al. 
\"Evaluating graph neural networks for link prediction: Current pitfalls and new benchmarking.\" Advances in Neural Information Processing Systems 36 (2024). \n\n[2] Zhu, Jing, et al. \"Pitfalls in link prediction with graph neural networks: Understanding the impact of target-link inclusion & better practices.\" Proceedings of the 17th ACM International Conference on Web Search and Data Mining. 2024."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See weakness."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper introduces untrained message-passing architectures, extending existing research from node classification to link prediction. This perspective is relatively novel and addresses the computational limitations of GNNs. The untrained models, by eliminating learnable parameters, are shown to be faster and more resource-efficient, making them suitable for large-scale applications.\n\n\n2. Theoretical results provide a deeper understanding of how UTMP layers approximate traditional path-based link prediction metrics (e.g., random walks and common neighbors), making the models highly interpretable."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper explores the use of untrained message-passing layers (UTMP) for link prediction tasks in graph neural networks (GNNs). The authors propose simplifying GNN architectures by removing trainable parameters and nonlinear components, resulting in interpretable and computationally efficient models. Their research finds that these simplified architectures can often outperform or match the performance of fully trained GNNs in link prediction tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Limited baselines are compared. Path-based methods and edge-wise methods should be compared. This doesn't mean the authors should change them into untrained models and do comparison. The author discuss the theoretical relationship with path-based methods, so the emperical comparison is needed to validate the theory.\n\n2. Limited datasets are included. Large datasets like OGB datasets are not included so the application is limited. \n\n3. For non-attributed graphs the results of original GNNs are much better than on the attributed graphs, compared to S-models and UT-models. Can the authors do some ablations to discuss this observation? Maybe it's because of the one-hot encoding? Can the authors show the results of one-hot encoding in attributed graphs?"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Untrained message passing layers in graph neural networks outperform trained counterparts for link prediction, offering efficiency and interpretability, especially with high-dimensional features."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024link,\ntitle={Link Prediction with Untrained Message Passing Layers},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3llRc6oXEW},\nnote={under review}\n}"
},
"abstract": {
"value": "In this work, we explore the use of untrained message passing layers in graph neural networks for link prediction. The untrained message passing layers we consider are derived from widely used graph neural network architectures by removing trainable parameters and nonlinearities in their respective message passing layers. Experimentally we find that untrained message passing layers can lead to competitive and even superior link prediction performance compared to fully trained message passing layers while being more efficient and naturally interpretable, especially in the presence of high-dimensional features. We also provide a theoretical analysis of untrained message passing layers in the context of link prediction and show that the inner product of features produced by untrained message passing layers relate to common neighbour and path-based topological measures which are widely used for link prediction. As such, untrained message passing layers offer a more efficient and interpretable alternative to trained message passing layers in link prediction tasks."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"graph neural networks",
"untrained message passing layers",
"link prediction",
"path-based similarity measures"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/7270511aba665d3ddc04383bc3048715e869dc40.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning on graphs and other geometries & topologies"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Link Prediction with Untrained Message Passing Layers"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3m6VqesEMw | T$^3$-S2S: Training-free Triplet Tuning for Sketch to Scene Generation | main | Active | Sketch-to-scene generation;training-free diffusion model;cross-attention mechnism | generative models | 5;5;5;6 | 4;5;4;5 | 3;2;3;2 | 2;2;2;2 | 2;2;3;2 | 5.25 | 4.5 | 2.5 | 2 | 2.25 | 0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. I wonder whether the proposed method is effective for other types of scenes. Exploring and presenting results on different scene categories would be beneficial.\n2. I suggest to incorporate user studies to enhance the robustness of the experimental validation.\n3. The appendix seems to be directly attached after the references. It might be more appropriate to format the appendix according to the conference guidelines, ensuring it is properly integrated or submitted as a supplementary document as required."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The proposed Prompt Balance and Characteristics Prominence strategies contribute to enhancing the quality of generated scenes, particularly in handling complex scenes with multiple instances.\n2. The method is relatively simple and does not require additional training, making it efficient and practical."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a Training-free Triplet Tuning (T³-S2S) method for sketch-to-scene generation, aiming to enhance the quality of generated scenes from sketches without additional training. The authors identify challenges in existing diffusion models related to imbalanced prompt energy and value homogeneity in the cross-attention mechanism, which lead to missing or coupled instances in complex scenes. To address these issues, they introduce two strategies: Prompt Balance and Characteristics Prominence. Additionally, they incorporate Dense Tuning from Dense Diffusion to refine attention maps. The proposed method is evaluated qualitatively on game scene generation tasks, demonstrating improvements in generating detailed, multi-instance scenes that align with input sketches and prompts."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The proposed Prompt Balance and Characteristics Prominence strategies appear to be incremental improvements on existing techniques such as TopK methods. Dense Tuning is adapted from Dense Diffusion. Therefore, the paper may lean more towards engineering optimization rather than presenting significant novel techniques.\n2. The experimental comparisons use models based on different versions of Stable Diffusion, with the proposed method and T2I Adapter using SDXL (a more advanced model), while Dense Diffusion is based on SD v1.5. Since SDXL offers higher generation quality than SD1.5, this results in an unfair comparison. Considering that this work draws inspiration from Dense Diffusion, it would be more appropriate to either adapt Dense Diffusion to SDXL or apply the proposed method to SD1.5 to ensure a fair evaluation.\n3. The experiments focus primarily on game scenes, with limited variety in the types of scenes presented. Some examples are repeated in the paper, and the lack of diverse examples may not fully demonstrate the method's generalizability to other contexts. \n4. The paper focuses heavily on qualitative analysis and lacks quantitative experiments. \n5. Line 360 and 361: citations of SDXL and ControlNet are incorrect."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Since this method can enhance the performance of existing controllable generation models without the need for training, is it possible to include experimental analyses based on more diverse existing models, such as the T2I-Adapter (https://arxiv.org/abs/2302.08453) that the article compares?\nIf additional experiments could be added to demonstrate this, it would better showcase the universal applicability of the method proposed in the article, significantly enhancing its contribution.\n\n2. Can the method proposed in this article be applied to a wider range of scenarios? From the experimental results presented in the article, it seems that simply altering the prompt should be sufficient to produce a more diverse array of scene images. I hope to see more experimental outcomes. Is it possible to incorporate a wider variety of input types (for example, black and white hand-drawn sketches)? Can the output be expanded to include a richer variety of scene images (for example, real-world images), not just limited to the isometric view of game scenes?\nParticularly, the experimental results in Figure 6 of the article suggest that the method's control over the style of generated images should stem solely from the specific input prompt \"Isometric view of game scene.\" Therefore, if the article could showcase more use cases across a variety of application scenarios, it would greatly increase the value of the method proposed in the article."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. This article addresses a very meaningful area, utilizing the approach of scene generation through sketches.\n\n2. The method requires no training and can be seamlessly integrated into existing controllable generation models, thereby enhancing the models' generative performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This article primarily focuses on the task of sketch-to-scene generation. Specifically, for complex scenes with multiple detailed objects, previous methods sometimes miss small or uncommon instances. Addressing this issue, the article proposes a training-free triplet adjustment method for sketch-to-scene generation. This method can be directly applied to existing SDXL and ControlNet models, enabling them to effectively tackle the multi-instance generation problem, including prompt balancing, feature highlighting, and dense adjustment. The proposal is made after reviewing the entire cross-attention mechanism. This solution revitalizes the existing ControlNet model, allowing it to effectively handle multi-instance generation, involving prompt balancing, feature highlighting, and dense adjustment. Experiments demonstrate that this triplet adjustment method enhances the performance of existing sketch-to-image models, enabling the generation of detailed, multi-instance 2D images that closely follow input prompts and enhance visual quality in complex multi-instance scenes."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The experimental content of this article is somewhat lacking in depth. From the presented experimental results, it only includes the generation of isometric view of game scene using colored sketch lines as control inputs. This raises the question of whether the method proposed in this article has universal applicability to scene sketch generation tasks.\n\n2. According to the description in this article, this training-free method at the feature level should be universally applicable to various controllable generation models. The method proposed in this article is based on ControlNet, but the article does not sufficiently discuss whether this approach would still be effective when integrated into other controllable generation models."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "see weakness."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper provides a thorough analysis of text embeddings and the value matrices in cross-attention, revealing the issues of prompt energy imbalance and value matrix homogeneity.\n\n2. The proposed method seemly addresses the challenges associated with sketch-to-scene generation involving multiple subjects."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper identifies a phenomenon in sketch-to-scene generation where certain instance are not effectively represented in complex, multi-instance scenes. The authors attribute this issue to the imbalance of prompt energy and the homogeneity of value matrices. To address these challenges, the paper proposes a method that incorporates prompt balance, characteristic prominence, and dense tuning for sketch-to-image generation in complex scenes."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "My primary concern lies with the clarity of the writing and the evaluation methodology:\n\n1. The writing lacks clarity and the notation system is somewhat confusing.\n2. Although the method does not require training, the optimization during inference may increase time demands, yet the paper does not report specific inference times.\n3. The results are solely qualitative, lacking quantitative comparisons. The effectiveness of the proposed modules is not validated with measurable metrics, raising concerns about potential cherry-picking in qualitative assessments. While it may be challenging to establish reasonable quantitative benchmarks for this task, there even is no mention of conducting user studies to support the findings."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "(1) The prompt balance mechanism could also be applied to text-to-image models. I recommend testing the effectiveness of the prompt balance mechanism on some multi-instance text-to-image benchmarks. Specifically, you could compare it with training-free optimization algorithms designed for multi-instance T2I scenarios, such as AAE [1] and StructureDiff [2].\n\n(2) When performing the characteristics prominence mechanism, the paper states that the optimal setting for K is 2. When the number of instances exceeds K, how does your characteristics prominence mechanism ensure that each instance is effectively enhanced? More specifically, how does it handle multiple instances that belong to the same category but have different fine-grained attributes? For example, when dealing with five cats—one red, one blue, one green, one black, and one white.\n\n(3) I have some questions regarding specific details. When performing characteristics prominence, should the operation be applied on cross_attention(I, T) or on I + cross_attention(I, T), where I is the image feature and T is the text embedding?\n\n(4) I believe further experiments on widely used multi-instance generation benchmarks are necessary to better demonstrate the effectiveness of your method.\n\n[1] Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models.\n\n[2] TRAINING-FREE STRUCTURED DIFFUSION GUIDANCE FOR COMPOSITIONAL TEXT-TO-IMAGE SYNTHESIS."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Methodologically, T3-S2S demonstrates certain innovations compared to previous approaches, and the results presented by the authors indicate that this method indeed yields notable improvements."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes an approach, termed Triplet Tuning for Sketch-to-Scene (T3-S2S), a training-free method designed to enhance T2I generation capabilities in multi-instance scenarios, effectively mitigating challenges such as instance omission. The T3-S2S method introduces a prompt balance mechanism to automatically balance the energy of each instance within text embeddings and incorporates a characteristics prominence mechanism that strengthens cross-attention by highlighting Top-K indices within each channel, ensuring that essential features are more robustly represented based on token sketches. Methodologically, T3-S2S demonstrates certain innovations compared to previous approaches, and the results presented by the authors indicate that this method indeed yields notable improvements. Therefore, at this stage, I would rate this paper as [weak accept]. However, I believe the current experimental results are still insufficient; please see the question section for details. I look forward to the authors' responses."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I believe the current experimental results are still insufficient. First, the paper does not include comparisons with state-of-the-art (SOTA) methods on popular benchmarks. Additionally, it lacks quantitative comparison results, which are essential for a comprehensive evaluation of the method’s effectiveness."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024tss,\ntitle={T\\${\\textasciicircum}3\\$-S2S: Training-free Triplet Tuning for Sketch to Scene Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3m6VqesEMw},\nnote={under review}\n}"
},
"abstract": {
"value": "Scene generation is crucial to many computer graphics applications. Recent advances in generative AI have streamlined sketch-to-image workflows, easing the workload for artists and designers in creating scene concept art. However, these methods often struggle with complex scenes with multiple detailed objects, sometimes missing small or uncommon instances.\nIn this paper, we propose a Training-free Triplet Tuning for Sketch-to-Scene (T$^3$-S2S) generation after reviewing the entire cross-attention mechanism. This scheme revitalizes the existing ControlNet model, enabling effective handling of multi-instance generations, involving prompt balance, characteristics prominence, and dense tuning. \nSpecifically, this approach enhances keyword representation via the prompt balance module, reducing the risk of missing critical instances. It also includes a characteristics prominence module that highlights TopK indices in each channel, ensuring essential features are better represented based on token sketches. Additionally, it employs dense tuning to refine contour details in the attention map, compensating for instance-related regions.\nExperiments validate that our triplet tuning approach substantially improves the performance of existing sketch-to-image models. It consistently generates detailed, multi-instance 2D images, closely adhering to the input prompts and enhancing visual quality in complex multi-instance scenes."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Sketch-to-scene generation",
"training-free diffusion model",
"cross-attention mechnism"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/07c94f04ac11a467d698d51b9d76862811d9d612.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "T$^3$-S2S: Training-free Triplet Tuning for Sketch to Scene Generation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3ms8EQY7f8 | Simulating Human-like Daily Activities with Desire-driven Autonomy | main | Active | desire;autonomy;daily activities; | applications to robotics, autonomy, planning | 3;5;5;5 | 4;5;3;2 | 2;2;2;2 | 1;2;2;2 | 3;4;4;2 | 4.5 | 3.5 | 2 | 1.75 | 3.25 | -0.258199 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Listed in the weaknesses section itself.\n\n- Make section 3 more clear by adding more details.\n- Give better reasoning for using the 3 dimensions of naturalness, coherence and plausibility.\n- measure the reliability of GPT-4o evaluations."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Originality: The paper presents a novel approach to simulating human-like daily activities using a desire-driven framework inspired by Maslow’s hierarchy of needs. Unlike traditional AI agents that rely on specific instructions or task-based rewards, the Desire-driven Autonomous Agent (D2A) framework introduces intrinsic motivation as the driving factor. This approach is unique in that it models a human-like motivational system, enabling the agent to select actions autonomously based on intrinsic desires rather than predefined goals.\n\nQuality: The paper is good quality with descriptive figures and clear results.\n\nClarity: The paper is well structured with a clear distinction provided between their proposed method and past methods and how theirs performs better.\n\nSignificance: This work holds significance for fields focused on human-like AI, agent-based simulations, and real-world applications requiring adaptive behaviour. By adopting an intrinsic motivation model, this framework could be used for applications such as social robotics, interactive gaming, and assistive technology, where human-like adaptability and engagement are essential."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a novel framework, the Desire-driven Autonomous Agent (D2A), designed to simulate human-like behaviour in daily activities. Traditional task-oriented AI agents primarily operate based on specific instructions or external rewards, but this approach often limits their ability to display intrinsic motivations similar to humans. This paper proposes an alternative: a motivation system inspired by Maslow’s hierarchy of needs, enabling the agent to autonomously generate and select activities based on desires like social interaction, self-care, and personal fulfilment. The D2A framework uses a system of 11 dimensions representing different human desires. The agent evaluates its current state and intrinsic motivations to decide on activities that align with its desires, generating more coherent and varied behaviours compared to existing models. The study uses Concordia, a text-based simulator, where a Game Master provides the environmental context for D2A to interact in a simulated household environment. The results suggest that D2A successfully simulates human-like activities by aligning with intrinsic motivations, which opens new avenues for developing desire-driven agents in various applications."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "There are a few weaknesses.\nIn Section 3, Problem Formulation, the math is not very clear, and it is also not explained in more detail how the activity distribution could be generated.\n\nIn Section 6.3.1, Naturalness, Coherence and Plausibility are used to evaluate the activity sequences, but these three dimensions seem to have been picked arbitrarily and I am not sure if they are enough to rigorously test the outputs.\n\nEvaluation is done using GPT-4o but how are we to ensure that these evaluations can be taken at face value. I think the paper would do well to do a human verification of these evaluations and how reliable they are."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "* My foremost question would be about the practical use of generating human daily activity data. I do not know this subject, and it might be the focus of an entire subcommunity which I am not familiar with. It would be important for the authors to elaborate on this point, and I am ready to reconsider my score if the demonstration is convincing;\n* What do the statistics of the generated activities look like?\n* Is there a list of predetermined activities one may do? (predefined action space?)\n\n## Notes\n\n* I think that the naturalness of activities might also come from the fact that desires are satisfied for quite some time giving the agent the opportunity to concentrate on more diverse activities than in the baselines.\n\n## Suggestions\n\n* I believe this paper should cite Park et. al 2023 (https://arxiv.org/abs/2304.03442), a seminal work in human behavior simulation, and Colas et. al. 2023, which presents an intrinsically-motivated agent operating on other principles than Maslow's hierarchy (https://arxiv.org/abs/2305.12487)."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "## Originality\n\nThe paper is quite original, as I have not seen Maslow's hierarchy of needs used in the context of a text agent.\n\n## Quality\n\nThe paper is well-written, experiments are quite well-designed, several seeds are provided to account for variability. The figures are nice, and the main one does a good job of summarizing how the agent works. The results of the paper support the claims made in the introduction and the abstract.\n\n## Clarity\n\nThe paper was easy to follow and the points are clear and easy to grasp.\n\n## Significance\n\nThe paper demonstrates a recipe for creating more realistic human daily trajectory activities. I can see the type of agent developed here being useful for other applications, such as creating LLM-based non-playable characters in video games."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper is concerned with simulating realistic trajectories of human activities in household environments. The authors introduce a text-based home environment as well as an LLM-based agent whose outputs aim to be activity traces as diverse as possible.\n\nInspired by Maslow's theory of needs, their agent incorporates 11 desire scalars (with each a target value, variable across agents). These desires are split according to the levels of Maslow's hierarchy. The desired levels, current levels, previous activities, and other environment information is provided to the LLM-agent in a tree-of-thought framework to generate the next activity.\n\nThe authors find the generated trajectories are deemed more likely by judge LLMs, and decrease dissatisfaction compared to other LLM-agent baselines."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* I am not completely convinced of the end-goal of this paper, specifically, building sequences of human activities. I see the authors justifying this goal in the potential for generating data for psychological, economic or sociological academic study. However, the validity of the generated behavior with respect to at least one downstream application is not investigated in the paper. How to make sure the data generated is useful in these contexts?\n* The introduction also briefly argues that building agents with behaviors aligning with human ones will guarantee their intelligence (the Turing test argument). But unfortunately this does not seem to be the case; I cannot see how giving agents simulated desires will make them score higher on GSM8K for instance.\n* A human-judge evaluation of the validity of the AI judge would be nice (although I am still pretty convinced of the comparison results)\n* I think there is a methodological flaw in the design of the human reference agent: the human is (as far as I understood) given a list of the desires, and the criteria is whether the agents are able to minimize dissatisfaction for those same desires. A better reference would be a dataset of real human activities on a stay-at-home day;"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "See the Weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The proposed framework allows agents to operate based on intrinsic motivations, which is a significant departure from existing task-oriented AI agents that rely on explicit instructions or external rewards. \n- The paper is well written and has nice figures.\n- The authors conducted a comprehensive comparative analysis with three baseline approaches (ReAct, BabyAGI, and LLMob) to evaluate the effectiveness of their framework. The development of a flexible text-based activity simulator using Concordia components is another strength of the paper."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a Desire-driven Autonomy framework for LLM-based agents to simulate human-like daily activities. The framework is inspired by Maslow's theory of needs and includes 11 dimensions of human-like desires. The Desire-driven Autonomous Agent (D2A) operates based on intrinsic motivations, autonomously proposing and selecting tasks that fulfill its motivational framework. The authors developed a flexible text-based activity simulator using Concordia components, supporting various agent types and textual environments for reliable interaction and evaluation. They conducted simulations in a detailed textual home environment with a Game Master providing relevant observations. The experiments demonstrated that D2A generates appropriate activities effectively and efficiently, achieving described desires. A comparative analysis with three baseline approaches (ReAct, BabyAGI, and LLMob) showed that D2A generates more natural, coherent, and plausible daily activities."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I think there are some weaknesses in this paper:\n- Limited Technological Innovation. The paper primarily focuses on the conceptual framework and theoretical underpinnings of the desire-driven autonomy approach. While the idea of using intrinsic motivations inspired by Maslow's theory of needs is innovative, the technological implementation details might not be as groundbreaking or novel within the recent advancements in the field of AI and LLMs. The work seems in the flow of LLM agents, while I think it is more of a LLM project rather than a technologically-solid paper.\n- Although the paper demonstrates the effectiveness of the D2A agent in a specific textual home environment, there might be questions about its generalization and scalability to other environments or domains. \n- The authors have established a set of concepts, such as human needs, desires, characteristics, and values, to guide the model's behavior. Although I understand the authors' intent to direct the generation of human-like daily activities, I am uncertain about the definition and composition of these intermediate variables. They are determined by the authors without sufficient psychological support or experimental validation to substantiate the overall design. How is the overall pipeline designed? I.e., for the five-level Maslow model, why you further define 11 desire dimensions?\n- The experiment is limited. The testing environment is confined to a single room containing a kitchen, living area, bedroom, and bathroom. It is unclear whether there is any randomness or variation between each epoch's setup, such as the rooms in the house or the items within the room. The experiments should be conducted in a wider variety of settings to minimize the impact of environmental bias and to get general ideas. How many different scenes or settings are included in the experiment? Also, for LLMs, have the authors studied how the prompt design will influence the overall results?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please answer the issues mentioned in the weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "**1. Proactive Action Based on Intrinsic Motivation**: The proposed D2A framework demonstrates an ability to proactively initiate actions driven by intrinsic motivations. Through the integration of a value system and a desire-driven planner, the framework establishes a dynamic interaction between desires and actions, wherein lowered desire values trigger corresponding activities to restore balance. This mechanism, though relatively simple, allows the agent to engage autonomously in daily activities in a manner that mirrors proactive human behavior, setting it apart from purely reactive or task-driven models.\n\n**2. Human-Inspired Intrinsic Motivations Across Life Dimensions**: Unlike recent approaches that focus on intrinsic motivations for exploration or collaboration, the D2A framework offers a multi-dimensional model inspired by human needs. By integrating eleven desire dimensions (e.g., physiological, social, and self-fulfillment needs), D2A provides a broader, more human-like motivational structure. This approach goes beyond typical reward-driven or exploratory motivations by simulating daily life activities that dynamically balance internal desires, reflecting human motivation patterns more authentically. This novelty enhances the agent's potential for replicating realistic, human-inspired behaviors within single-agent environments."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces the Desire-driven Autonomous Agent (D2A), a framework for simulating human-like daily activities based on intrinsic motivations rather than explicit tasks or external rewards. Inspired by Maslow's Theory of Needs, D2A prioritizes actions that fulfill a hierarchy of desires (e.g., physiological, social, self-actualization), allowing it to autonomously select actions that align with its motivational framework. This desire-driven approach contrasts with traditional task-oriented agents by focusing on fulfilling internal motivations, which provides the agent with the capacity for more adaptable and human-like behavior."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**1. Inconsistent Application of Maslow’s Theory**: While D2A is presented as being inspired by Maslow’s Theory of Needs, it does not implement the theory’s hierarchical structure. According to Maslow, higher-level desires are pursued only once lower-level needs are met, but D2A treats each desire independently, allowing the agent to pursue higher-level needs without satisfying foundational ones. This weakens the theoretical foundation and could make the agent's behavior feel less authentically human-like.\n\n**2. Overly Simplistic Desire-Action Dynamics**: Despite its innovative multi-dimensional motivational structure, the D2A framework uses a straightforward linear deduction of desire values to simulate the fluctuation of needs. This approach falls short of capturing the organic variation in human desires, which often intensify or wane in response to context, time, or recent actions. By exploring more complex dynamics—such as non-linear decay, situational adjustments, or time-of-day cycles—the framework could better reflect realistic interactions between desires and actions. The current simplicity reduces the authenticity of the agent's behavior, limiting the depth of its desire-driven model.\n\n**3. Narrow Focus on Physiological Desires in Single-Agent Setting**: The experimental results in the paper primarily emphasize physiological desires, with minimal exploration of higher-level motivations such as social connectivity or self-actualization. Additionally, the study tests only one agent, limiting insights into potential interactions or complex social behaviors that might arise in multi-agent settings. These restrictions make the results narrowly focused and reduce the paper's ability to demonstrate the full range of behaviors that D2A could potentially simulate, ultimately constraining the framework’s demonstrated impact.\n\n**4. Opaque GPT-4o Evaluation Methodology**: The evaluation of human-likeness using GPT-4o lacks transparency. 
Key details—such as the prompts used, scoring consistency, and validation of the assessment criteria—are not provided, making it difficult to gauge the robustness of the results. This lack of methodological clarity limits confidence in whether the evaluation effectively captures nuanced human-like behavior or merely reflects surface-level patterns.\n\n**5. Limited Qualitative Evaluation of Generated Action Sequences**: The paper lacks an in-depth qualitative analysis of the action sequences generated by D2A, making it difficult to assess the framework’s true effectiveness. The brief, simplistic sequence shown in Appendix O does not illustrate the model’s potential for creating complex, dynamic routines, leaving the impact of D2A’s design unclear. More detailed, varied sample sequences, along with thoughtful discussion, would provide stronger evidence of D2A's capabilities in simulating realistic human-like behaviors."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "A desire-driven autonomy framework to guide an agent to simulate human-like daily activities"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024simulating,\ntitle={Simulating Human-like Daily Activities with Desire-driven Autonomy},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3ms8EQY7f8},\nnote={under review}\n}"
},
"abstract": {
"value": "Existing task-oriented AI agents often depend on explicit instructions or external rewards, limiting their ability to be driven by intrinsic motivations like humans. In this paper, we present a desire-driven autonomy framework to guide a Large Language Model based (LLM-based) agent to simulate human-like daily activities. In contrast to previous agents, our Desire-driven Autonomous Agent (D2A) operates on the principle of intrinsic desire, allowing it to propose and select tasks that fulfill its motivational framework autonomously. Inspired by the Theory of Needs from Maslow. A.H., the motivational framework incorporates an understanding of human-like desires, such as the need for social interaction, personal fulfillment, and self-care. Utilizing a desire-driven task generation mechanism, the agent evaluates its current state and takes a sequence of activities aligned with its intrinsic motivations. Through simulations, we demonstrate that our Desire-driven Autonomous Agent (D2A) generates coherent, contextually relevant daily activities while exhibiting variability and adaptability similar to human behavior. A comparative analysis with other LLM-based frameworks demonstrates that our approach significantly enhances the rationality of the simulated activities."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"desire;autonomy;daily activities;"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/a0643fadb5187142e3a64870cef981e0b8314a56.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to robotics, autonomy, planning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Simulating Human-like Daily Activities with Desire-driven Autonomy"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3n4RY25UWP | An Information Criterion for Controlled Disentanglement of Multimodal Data | main | Active | Multimodal Representation Learning;Disentanglement;Self-Supervised Learning;Information Theory | unsupervised, self-supervised, semi-supervised, and supervised representation learning | 3;5;6;8 | 3;4;3;2 | 2;2;4;4 | 2;2;4;3 | 2;2;3;3 | 5.5 | 3 | 3 | 2.75 | 2.5 | -0.588348 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please check the weakness section."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper presents a rigorous, information-theoretic framework for disentangling data for different domains and modalities under conditions where MNI is unattainable.\n\n2. Based on the theoretical analysis, the authors designed a two-step training algorithm, and provided an optimality guarantee for this proposed method theoretically."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a self-supervised learning approach to disentangle shared and modality-specific information in multimodal data. The authors also explain that the optimal case in prior work is not doable based on Minimum Necessary Information (MNI), as the comprehensive analysis of the optimality. The experiments on vision-language and molecule-phenotype retrieval tasks demonstrate its effectiveness."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Weakness:\nFor the experiments on synthetic data, they only contain the data for unattainable MNI. The synthetic experiments for attainable MNI are also essential here to demonstrate the reliability of the theoretical bound. My suggestion is to add Gaussian noise produced by different variances as the degrees of MNI attainable or unattainable.\n\n\n1. In Figure 4 (b), the superiority of DISENTANGLEDSSL is not significant compared to other baselines, such as JointOPT. The hyperparameter front of JointOPT is closer to the front of DISENTANGLEDSSL.\n\n2. Table 1 shows the three versions of DISENTANGLEDSSL are unstable. For example, DISENTANGLEDSSL (shared) can achieve promising performance on MOSI while failing on MUSTARD. It would be better to explain why such a phenomenon exists, probably focusing on the main difference between these three versions and properties of multi-modal data. \n\n3. The performance gain shown in Table 2 is quite limited. Could the author explain if small improvements might be important in this domain? In addition, it would be better if the authors could provide the reason for this tiny improvement. \n\n4. The paper does not satisfy the page limitation."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Although in most cases DisentangledSSL-both achieves highest performance, there are also several tasks where either DisentangledSSL-shared or DisentangledSSL-specific alone achieves top performance (and sometimes by quite a large margin over DisentangledSSL-both, such as Mustard). Do you observe any trend / suggest any guidelines for when it is best to use each configuration?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "1. The paper introduces a new perspective in how to perform effective multimodal representation learning. Specifically, the paper demonstrated that disentanglement of modality-specific and shared information into 2 separate representations can be effective in improving downstream task performance, which seems to be original and novel.\n\n2. The authors provided extensive theoretical justifications and proofs for their proposed approach.\n\n3. The experiments are quite comprehensive. They include both synthetic and real-world datasets, and the proposed method is evaluated against several multimodal SSL baselines on different tasks from drastically different domains. The proposed method achieved top performance in almost all tasks.\n\n4. All experiment details are presented in the Appendix (high reproducibility)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors proposed a novel self-supervised representation learning method for multimodal representation learning. The proposed method disentangles the representation of multimodal information into shared information and modality-specific information, and uses a separate neural network to learn each representation. The authors supported their proposed method through extensive theoretical analysis and proofs. The proposed method was evaluated on both synthetic datasets and real-world multimodal datasets, where the proposed method achieved top performance over baselines in most tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While the extensive use of information-theory notations allowed rigorous proof of the proposed approach, it isn't really easy for readers who isn't interested in the proofs and just want to learn about the concrete algorithm/methodology. Perhaps the authors should consider including a Pseudocode/Alg block either in the main text or in Appendix to clearly demonstrate the training process."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "n.a."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please respond to the weaknesses listed above."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "It makes sense to learn disentangled shared and modality-specific representations of multimodal data. \n\nThe theoretical analysis to evaluate the quality of disentanglement, which can be applied even in cases where Minimum Necessary Information (MNI) is unattainable, is provided.\n\nThe efficacy of DISENTANGLEDSSL is demonstrated across a range of synthetic and real-world multimodal datasets and tasks, including prediction tasks for vision-language data and molecule-phenotype retrieval tasks for biological data."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors believe that \"shared information between modalities is precisely what is relevant for downstream tasks\" and \"the modality gap ... results in misalignment between modalities, restricting the application of these methods in numerous real-world multimodal scenarios.\" Hence they motivate \"the need for a disentangled representation space that captures both shared and modality-specific information\". To address this need, they propose DISENTANGLEDSSL, an information-theoretic framework designed to learn disentangled shared and modality-specific representations of multimodal data. The authors have conducted a theoretical analysis to evaluate the quality of disentanglement, which can be applied even in cases where Minimum Necessary Information (MNI) is unattainable. The efficacy of DISENTANGLEDSSL is demonstrated across a range of synthetic and real-world multimodal datasets and tasks, including prediction tasks for vision-language data and molecule-phenotype retrieval tasks for biological data."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Some of the key assertions motivating this work are inaccurate, and the proposed graphical model is flawed, which together raise questions about the validity of the resulting framework.\n\nFor instance, the statement \"shared information between modalities is precisely what is relevant for downstream tasks\" is not universally accurate. What is relevant depends heavily on the specific downstream tasks. Furthermore, \"shared information\" is a vague term, especially when applied across modalities, making it problematic in this context.\n\nThe assertion that \"the modality gap ... results in misalignment between modalities, restricting the application of these methods in numerous real-world multimodal scenarios\" is also oversimplified. The modality gap is not the sole or primary cause of misalignment; other factors such as alignment methods, distributional shifts, domain shifts, sampling biases, and more can contribute significantly.\n\nI suggest the authors to refine their motivation to more accurately reflect the complexities of multimodal relationships, while still maintaining the core idea of disentangling shared and modality-specific information.\n\nThe graphical model in Figure 2, intended to represent the generative process (as claimed in line 128), has two major issues. First, the authors have conflated the generative and inference processes. Only the pathway from Z to X represents the generative model, while the path from X to \\hat{Z} corresponds to inference, which should not be included in the data-generating graphical model. This issue is straightforward to resolve by removing the inference process from the model, which would not affect the algorithm or results, or to create separate diagrams, one for the generative process and the other for inference to clarify.\n\nThe second issue is more severe: the assumption of a shared latent variable Z_c existing between two modalities may not hold. 
This assumption lacks foundation, as it is more likely that the relationship takes the form of two variables Z_c^1 and Z_c^2, with potential connections between them: (1) Z_c^1 -> Z_c^2, (2) Z_c^1 <- Z_c^2, or (3) or Z_c^1 <-C-> Z_c^2, depending on the modalities involved. This discrepancy significantly undermines the justification for the proposed algorithm. I encourage the authors to discuss the implications of using separate but potentially connected latent variables (Z_c^1 and Z_c^2) instead of a single shared Z_c, and how might this change affect the proposed algorithm and results. This could lead to a more nuanced and realistic model of multimodal relationships."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Are there any alternative approaches for implementing the $I(Z^1;\\hat{Z}_{c}^{1*})$ in $L_s^1$? If so, how does their performance compare to that of the orthogonal loss implementation?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "1.The idea of applying conditional entropy bottleneck (CEB) in multi-modal self-supervised learning is novel.\n\n2.Comprehensive theoretical analysis is conducted to prove the optimality of both modality-specific and modality-shared information.\n\n3.DisentangledSSL demonstrates superior performance across tasks, including vision-language prediction and molecule-phenotype retrieval."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses the disentanglement of modality-shared and modality-specific information in multi-modal self-supervised learning and proposes DisentangledSSL. From an information-theoretic perspective, the modality-shared information is optimized using a conditional entropy bottleneck (CEB). Correspondingly, the authors formulate an optimization problem to isolate modality-specific information. Theoretical analysis of the optimality of each disentangled representation, particularly when Minimum Necessary Information is unattainable, along with experimental results, demonstrates the superiority of DisentangledSSL."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.How does the optimization objective for the modality-shared representation change from Eq. (1) to Eq. (3)? Why is $I(X^1;X^2)$ omitted?\n\n2.Related studies, such as CoCoNet [1] and SimMMDG [2], which also address the separation of modality-shared and modality-specific information, are recommended for inclusion and comparison. Note that SimMMDG [2] can be easily adapted to a self-supervised setting by substituting supervised contrastive learning with the original self-supervised counterpart.\n\n3.Ablation studies on the impact of $\\beta$ and $\\lambda$ in the MultiBench datasets should be provided to demonstrate the importance of separating modality-specific and modality-shared information in real-world applications. \n\n4.Some typos.\n1)$Z^2$ should be $X^2$ in Eq.(6) and in the equation below Eq.(6).\n2)$I(X^1;Z^1)$ should be $I(X^1;X^2)$ in Figure 3.\n \n[1] Li J, Qiang W, Zheng C, et al. Modeling multiple views via implicitly preserving global consistency and local complementarity. TKDE, 2022.\n[2] Dong H, Nejjar I, Sun H, et al. SimMMDG: A simple and effective framework for multi-modal domain generalization. NIPS, 2023."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose DisentangledSSL to separate modality-specific information from shared information in multimodal data, providing theoretical guarantees and strong empirical performance, especially when Minimum Necessary Information is unattainable."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024an,\ntitle={An Information Criterion for Controlled Disentanglement of Multimodal Data},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3n4RY25UWP},\nnote={under review}\n}"
},
"abstract": {
"value": "Multimodal representation learning seeks to relate and decompose information inherent in multiple modalities. By disentangling modality-specific information from information that is shared across modalities, we can improve interpretability and robustness and enable downstream tasks such as the generation of counterfactual outcomes. Separating the two types of information is challenging since they are often deeply entangled in many real-world applications. We propose $\\textbf{Disentangled}$ $\\textbf{S}$elf-$\\textbf{S}$upervised $\\textbf{L}$earning (DisentangledSSL), a novel self-supervised approach for learning disentangled representations. We present a comprehensive analysis of the optimality of each disentangled representation, particularly focusing on the scenario not covered in prior work where the so-called $\\textit{Minimum Necessary Information}$ (MNI) point is not attainable. We demonstrate that \\algo successfully learns shared and modality-specific features on multiple synthetic and real-world datasets and consistently outperforms baselines on various downstream tasks, including prediction tasks for vision-language data, as well as molecule-phenotype retrieval tasks for biological data."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Multimodal Representation Learning",
"Disentanglement",
"Self-Supervised Learning",
"Information Theory"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/1cbbdc92eae7216190180703b0608ff22b851ac2.pdf"
},
"presentation": null,
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "An Information Criterion for Controlled Disentanglement of Multimodal Data"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3n6DYH3cIP | Extendable and Iterative Structure Learning for Bayesian Networks | main | Active | structure learning;Bayesian networks;iterative | probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.) | 3;5;6;6;8 | 2;3;3;3;3 | 2;3;3;3;3 | 2;3;3;2;3 | 2;3;3;2;3 | 5.6 | 2.8 | 2.8 | 2.6 | 2.6 | 0.800095 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- How the score-based and constraint-based methods are used in the extendable PC algorithm?\n- Are the more existing work that incrementally learn the structure? What are their strengths and weaknesses?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "+ structure learning is a challenging problem due to exponentially large search space, and the paper makes progress in speeding up the search without sacrificing accuracy.\n+ the proof is solid and the presentation is clear.\n+ strong experimental results in accuracy and running time."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes an iterative/incremental algorithm to learn the structure of a Bayeisan network from observed data.\nThe algorithm added one variable at a time, and modify the previously learn structure accordingly.\nThe novelty is to prove that adding one variable only leads to deletion of previous edges, and therefore trim the search space of possible networks.\n\nIn particular, Lemma 1 shows that if the new variable Y satisfies some properties, a certain kind of edges in the old graph can be safely deleted. The paper introduced constrained-based and score-based approaches to utilize this lemma. The results is a reduction of number of CI tests.\n\nExperiments on 13 datasets shows that the running time is significantly reduced without compromising accuracy of the learned structure."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The scope of the paper (symbolic Baysian network) may not fit ICLR conference (representation learning).\n- Insufficient discussion of related work. There shall be a large number of related work on incremental structure learning, while the submission only cite two most relevant ones ((Kocacoban & Cussens, 2019) and (Alcobe, 2005)). This makes the contribution of the submission less clear."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Is the Extendable P-map learner in Algorithm 3 implemented using Algorithm 1?\n2. What is Sepset(X,Y)?\n3. An intuitive explanation for why the iterative learning algorithm outperforms PC algorithm while learning the complete graph is recommended.\n4. How is this work positioned relative to the prior works that study the information-theoretic limits of Bayesian structure learning?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The problem studied in the paper is well-motivated and will be of interest to ML community. Recycling the existing information when new variables are added or revealed provides a novel perspective to the structure learning problem. The iterative structure learning algorithm is a valuable contribution. Experiments convincingly demonstrate the computational efficiency of the proposed approach."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper provides a novel perspective to the Bayesian structure learning problem with an efficient mechanism to add new variables to an underlying graphical structure. Specifically, the learning strategy hinges on efficiently incorporating a new variable $Y$ into an existing Bayesian network ${\\cal G}$ over the set of variables ${\\cal X}$, which results in an. updated Bayesian network $\\bar {\\cal G}$ on the augmented set of variables ${\\cal X} \\cup Y$. This learning strategy is further extended to provide a novel learning paradigm for structure learning for Bayesian networks. Experiments demonstrate significant computational efficiency over existing state-of-the-art algorithms."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I found the discussions and organization of Section 3 to be more convoluted than necessary. In particular, it would be helpful to have the relevance of Algorithms 2-4 explictly elucidated."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I have the following comments about the paper:\n\n1. Performance guarantees such as error analysis, or on how to check whether the algorithm has converged to the correct graph structure are not discussed. \nA discussion on how to verify the correctness of the P-map finder algorithm used should be added. If the iterative approach is used, how does the error accumulate?\nAre the conditions in Lemma 1 and 2 sufficient for the P-map finder algorithm output the true structure?\n\n\n2. The key assumption that when a new variable $Y$ is added to the existing set $X$, that no new edges get assigned between the elements of $X$, this assumption needs further explanation. This seems to be a necessary condition of the algorithm to have low computational cost. In many situations, the introduction of a new variable might introduce new dependencies between existing nodes, e.g., in root-cause analysis, causal learning, molecular prediction, and others. Also, such situations could occur in time evolving DAGs. Further discussion on these will illustrate the applicability of the proposed method to different problems. \n\n\n3. Minor Comment:\ni. In abstract, algorithm named PC might not be known to general readers. Similarly, FCI. Some of these acronyms are not defined."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Strengths:\n1. Extendable structure learning for Bayesian networks is studied and two new approaches are proposed. \n2. The new approaches achieve much lower runtime than relearning the graphs without prior structure. \n3. Numerical results are presented on many graph datasets showing significant speedup."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper studies the structure learning problem in Bayesian networks, i.e.. learning a directed acyclic graph (DAG) which defines the conditional probability distribution over the given variables. Particularly, an extendable structure learning strategy is proposed to update an existing Bayesian network graph efficiently, when a new variable is introduced. Two approaches (constraint-based and score-based) are discussed for extendable structure learning, which leverage the previous graph structure, and have much lower computational cost compared to relearning the graph. It is then shown that these procedures can be used to design an iterative algorithm for structure learning. Numerical results on many graph datasets illustrate the performance of the proposed method, and shows significant speedup over the approach where the graph is learned from scratch."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Weakness:\n1. Performance guarantees for the proposed method are not presented.\n2. Details about a key assumption can be further discussed."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Is there any bound on the total number of CI tests required for running Algorithm 3?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper is well organized. The notations and definitions are mostly self contained.\n- The authors provide experiments on multiple datasets, and show that their approach save significant runtime because of the fewer number of CI tests required in the proposed iterative method. \n- The proposed method seems straightforward and easy to implement in practice."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes an extendable stracture learning method for Bayesian networks, which updates an existing network by adding new variables. The authors also propose a iterative approach for structured learning by starting with a small set and adding the remaining variables to the P-map graph. The authors run the extendable PC algorithm on multiple datasets, and show that their approach requires fewer number of CI tests compared to the original approaches."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I am not an expert on this line of literature. My main concern is that the paper seems very heuristic with no formal guarantees: \n- While the experimental results look good, I wonder if the proposed method of extendable PC has any consistency, faithfulness, or optimality guarantee. \n- The results in Table 2 - 3 suggest that extendable PC always has a better runtime with fewer number of CI tests compared to PC. Can that be proved?\n- The result in Table 5 shows that the proposed iterative PC does not always require fewer CI tests compared to PC. Under what statistical or topological conditions will that happen? In my opinion this is be the bigger risk, because without any theoretic characterization, this shows the possibility that the proposed approach does not generalize."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "n/a"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1) A simple iterative type algorithm for learning bayesian networks, which is computationally efficient. \n2) Insights on unidentifiability and its relationship to faithfulness of the graph. \n3) Backed up by theoretical claims"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents an efficient method for updating Bayesian network structures as new variables are introduced, eliminating the need for retraining from scratch. The approach reduces computational costs by up to 1300x without sacrificing accuracy. The authors also propose an iterative strategy that builds the network iteratively, thereby offering runtime benefits comparable to common algorithms like PC while maintaining accuracy. This scalable approach is well-suited for real-time and high-dimensional applications."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "n/a"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024extendable,\ntitle={Extendable and Iterative Structure Learning for Bayesian Networks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3n6DYH3cIP},\nnote={under review}\n}"
},
"abstract": {
"value": "Learning the structure of Bayesian networks is a fundamental yet computationally intensive task, especially as the number of variables grows. Traditional algorithms require retraining from scratch when new variables are introduced, making them impractical for dynamic or large-scale applications. In this paper, we propose an extendable structure learning strategy that efficiently incorporates a new variable $Y$ into an existing Bayesian network graph $\\mathcal{G}$ over variables $\\mathcal{X}$, resulting in an updated P-map graph $\\bar{\\mathcal{G}}$ on $\\bar{\\mathcal{X}} = \\mathcal{X} \\cup \\{Y\\}$. By leveraging the information encoded in $\\mathcal{G}$, our method significantly reduces computational overhead compared to learning $\\bar{\\mathcal{G}}$ from scratch. Empirical evaluations demonstrate runtime reductions of up to 1300x without compromising accuracy. Building on this approach, we introduce a novel iterative paradigm for structure learning over $\\mathcal{X}$. Starting with a small subset $\\mathcal{U} \\subset \\mathcal{X}$, we iteratively add the remaining variables using our extendable algorithms to construct a P-map graph over the full set. This method offers runtime advantages comparable to common algorithms like PC while maintaining similar accuracy. Our contributions provide a scalable solution for Bayesian network structure learning, enabling efficient model updates in real-time and high-dimensional settings."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"structure learning",
"Bayesian networks",
"iterative"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/0524315b7e6ebf64389c11e1270a208f362fc639.pdf"
},
"presentation": null,
"primary_area": {
"value": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Extendable and Iterative Structure Learning for Bayesian Networks"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3nkIRKh3Sk | AVSS: a new benchmark for airport video semantic segmentation | main | Active | Airport Ground;Semantic Segmentation;Video Surveillance | datasets and benchmarks | 3;5;5;6 | 2;3;5;4 | 2;3;3;3 | 2;2;2;3 | 2;3;3;3 | 4.75 | 3.5 | 2.75 | 2.25 | 2.75 | 0.718185 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "There is a natural hierarchy among some of the classes, such as Building, Terminal (subtype of Building), Tower (subtype of Building). Did you consider defining an explicit class taxonomy rather than just a flat list of classes? A taxonomy would enable natural expansion to additional classes and potentially resolve ambiguities such as incorrectly declaring a false alarm when a Terminal is labeled as a Building.\n\nDuring data collection, why were the videos so short, 15 sec? Since the cameras are fixed, it seems straightforward to collect long videos e.g. hours in order to capture a diversity of long-range airport activities.\n\nHow many airports are in the dataset? How many unique camera views? Many important details are missing.\n\nWhat proportion of the dataset was used for fine-tuning vs. testing? Were less-frequent classes handled differently than more common ones?\n\nThe IOU results on Person are the lowest of any any category, despite the maturity of person detectors. Why? Is it the small pixel size of people in this dataset? What would happen if the images were up-sampled 2X or 4X?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Demonstrating the low accuracies of popular segmentation techniques on a real-world problem is a significant contribution. The challenges of this dataset are also found in many surveillance datasets, hence addressing them through research on this dataset could have impact in many surveillance domains.\n\nThe distribution of annotations and pixels by class is useful to see, Fig. 5. The dataset seems to follow a long-tail distribution which is very common in real-world settings like this, whereas more contrived datasets often have a relatively uniform distribution.\n\nThe annotation compactness metric is well formulated, based on a standard spatial moment calculation. It would be interesting to see a comparison to other surveillance segmentation datasets.\n\nThe evaluation is quite thorough, testing a variety of current approaches for both image and video segmentation. The low accuracies of all of the algorithms indicate the difficulty of the problem and the utility of the dataset."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a new dataset consisting of 250 short videos of outdoors airport scenes, with segmentation annotations of 18 categories on all frames. A variety of recent image and video segmentation methods are fine-tuned and tested on the dataset, yielding relatively low accuracies on most categories. The dataset is characterized by various analytics and a few metrics."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "There is essentially no stated motivation for semantic segmentation at airports. The one mention of this, in the first paragraph, is very vague. It would be more convincing to enumerate a set of use cases motivating the need for accurate segmentation rather than bounding boxes.\n\nThe Intro should clarify that only outdoor scenes are included in the dataset, since there are numerous video surveillance datasets that include indoor scenes at airports and other large facilities.\n\nDownselecting from 5000 collected videos to 250 included videos is a huge reduction. Was this primarily motivated by the cost of creating ground-truth segmentations? The methods used for data selection are not described.\n\nThe data annotation section describes the process for annotating one image, but does not mention how video is annotated. In video from a fixed camera, a single annotation of a fixed object, e.g. a Building, should be transformable to subsequent video frames without editing. Was this method used? Even for movers, the annotation on the previous frame can be copied to the current frame and adjusted, greatly reducing effort and inter-frame annotation variability.\n\nCreating segmentation annotations manually is expensive on images, even more so on video as performed here. The dataset is much smaller than VSPW in terms of number of videos and classes, partly because of the narrower problem domain.\n\nImage coherence, Eq. 1, does not seem to be an advantage or a disadvantage. With a moving camera, image coherence would be very low, for example. A large number of movers would yield low coherence. Similarly, label coherence is a function of camera and object movement, not just label spatial consistency across frames. The purpose of these measures, and the comparison to other datasets, is very unclear and not well motivated. I’d suggest removing this section, or, more significantly, reformulating these metrics to measure how consistent an object label remains across video frames.\n\nThe topic of this paper is rather specific and seems more appropriate for a computer vision venue such as WACV, or AVSS (the conference with the same acronym as the dataset)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"Yes, Privacy, security and safety"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please refer to the weakness part."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. It provides insights into potential issues in airport semantic segmentation, such as extreme multi-scale, intra-class diversity, and inter-class similarity.\n2. The proposed benchmark has been evaluated against various state-of-the-art (SOTA) models."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a novel benchmark for airport video semantic segmentation and introduces a semantic segmentation algorithm based on a 3D airplane model. The authors identify key challenges in airport scenes, including extreme multi-scale variation, intra-class diversity, and inter-class similarity. Addressing these issues, they propose a benchmark that is evaluated from various perspectives. First, they conduct a statistical analysis by measuring class distribution, inter-frame coherence, and compactness. By evaluating various models on the proposed benchmark, the authors demonstrate its increased difficulty compared to existing public datasets. To further assess generalizability, they train models on the proposed dataset and compare performance with the VSPW dataset. Finally, the authors present a 3D airplane model-based algorithm tailored specifically for airport segmentation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. It would be helpful to specify the diversity principles in “Data collection” in detail.\n\n2. A reference to AnyLabeling in the “Data annotation” section should be included.\n\n3. The analysis of whether the proposed benchmark can cover intra-class diversity, a key challenge in airport semantic segmentation, is insufficient. It would be beneficial to examine various aspects, such as color distribution and feature distribution within the same category, to provide a more comprehensive analysis.\n\n4. There is a need to analyze whether the proposed benchmark addresses inter-class similarity.\n\n5. I have doubts about the actual relationship between compactness and segmentation difficulty. For example, in Table 2, while there is a large gap in results between \"Runway\" and \"Liaison Road,\" the difference in compactness is not significant. Additionally, while there is a small gap in results between \"Runway\" and \"Person,\" the difference in compactness is large.\n\n6. For the generalizability analysis experiment, comparing similar categories from another dataset (such as Cityscapes) would be more helpful.\n\n7. The authors suggest that a model performing well on AVSS is likely to achieve favorable segmentation results on other datasets (page 8, lines 430-431). However, it would be useful to have an experiment to verify this claim. For example, comparing the top 3 highest-performing and bottom 3 lowest-performing models on AVSS could provide valuable insights into whether models that perform well on AVSS show a similar tendency on other datasets.\n\n8. There are no qualitative results for the proposed 3D airplane-based algorithm for airport semantic segmentation."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "No"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "No."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "In conclusion, the main contributions are as follows:\n1. This paper establishes the novel AVSS, providing a benchmark for airport semantic segmentation.\n2. This paper evaluates the generalizability of AVSS and 18 SOTA segmentation algorithms on AVSS.\n3. This paper proposes principles for designing airport video semantic segmentation models."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces the first large-scale Airport Video Semantic Segmentation dataset (AVSS) for airport surveillance. AVSS comprises 18 common semantic categories at airports, and 250 videos, totaling over 140,000 frames with accurate manual annotations. AVSS covers a wide range of challenges for airport video surveillance, such as extreme multi-scale, intra-class diversity, inter-class similarity, etc. The authors analyze statistical information and evaluate 18 state-of-the-art (SOTA) semantic segmentation algorithms on AVSS. The significant performance degradation indicates that current models are far from practical application. Furthermore, this paper discusses how to develop video semantic segmentation algorithms for airport surveillance and the generalizability of AVSS to other tasks and datasets. AVSS serves as a research resource for airport semantic segmentation and a robustness evaluation tool for segmentation algorithms in practical applications."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The weaknesses are as follows:\n\n1. In Table 1, the paper shows that the coherence is higher than in other datasets. However, for the coherence metrics in Equations (1) and (2), it is not clear how to compute the corresponding pixels between different frames. In addition, higher coherence means the data variety is low for a video sequence; as the AVSS dataset has only 250 videos, does it mean it has only 250 scenes with just changing people, planes, etc., while the background is fixed? From this view, the variety of the AVSS dataset may be low.\n2. In Section 4.3 GENERALIZABILITY, the first sentence is \"test the model trained on AVSS on VSPW\"; is it a typo? Table 4 shows \"The classes IoU of SOTA models trained on AVSS, evaluated on AVSS and VSPW.\" For this experiment, it is necessary to also compare a model trained on VSPW and tested on AVSS and VSPW."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Refer to weakness."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "+ The dataset is based on novel airport scenes, providing a new perspective to evaluate VSS models and contributing a valuable resource for future researchers.\n+ The dataset is manually labeled, ensuring high-quality annotations.\n+ The experiments reveal the limitations of existing models on this dataset."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a new dataset for airport video semantic segmentation (AVSS) with manually labeled masks. It then benchmarks current VSS models on this dataset, highlighting a significant drop in performance when applied to AVSS and providing future insights for designing suitable VSS models for this domain."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The scale of AVSS (only 250 videos) is relatively small compared to current VSS datasets.\n- The related work section should include more recent advancements in VSS models, and the benchmarking should include some recent works, such as:\n - Mask propagation for efficient video semantic segmentation\n - Pay attention to target: Relation-aware temporal consistency for domain adaptive video semantic segmentation\n - Temporal-aware Hierarchical Mask Classification for Video Semantic Segmentation"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024avss,\ntitle={{AVSS}: a new benchmark for airport video semantic segmentation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3nkIRKh3Sk},\nnote={under review}\n}"
},
"abstract": {
"value": "Airport video semantic segmentation is fundamental to airport surveillance applications, yet there currently lacks a specialized benchmark and algorithms for this task. In this paper, we introduce the first large-scale Airport Video Semantic Segmentation dataset (AVSS) for airport surveillance. AVSS comprises 18 common semantic categories at airports, and 250 videos, totaling over 140,000 frames with accurate manual annotations. AVSS covers a wide range of challenges for airport video surveillance, such as extreme multi-scale, intra-class diversity, inter-class similarity, etc. We analyze statistical information and evaluate 17 state-of-the-art (SOTA) semantic segmentation algorithms on AVSS. The significant performance degradation indicates that current models are far from practical application. Furthermore, we discuss how to develop video semantic segmentation algorithms for airport surveillance and the generalizability of AVSS to other tasks and datasets. AVSS serves as a research resource for airport semantic segmentation and a robustness evaluation tool for segmentation algorithms in practical applications. AVSS is available at www.agvs-caac.com/avss/avss.html."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Airport Ground",
"Semantic Segmentation",
"Video Surveillance"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/668d8d696b379d3ab42065f7f7800acb33d0b13c.pdf"
},
"presentation": null,
"primary_area": {
"value": "datasets and benchmarks"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "AVSS: a new benchmark for airport video semantic segmentation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3nwlXtQESj | Path Complex Message Passing for Molecular Property Prediction | main | Active | Molecular Property Prediction; Path Complex; Geometric Deep Learning; High-order Interaction; Low-order Interaction | applications to physical sciences (physics, chemistry, biology, etc.) | 3;5;5;6 | 5;3;5;2 | 2;2;1;3 | 2;2;1;3 | 2;2;2;2 | 4.75 | 3.75 | 2 | 2 | 2 | -0.750568 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Missing ']' in the Fig. 1 dihedral term.\n- If the paper aims to emphasize the Path Weisfeiler-Lehman (PWL) capacity, it should evaluate the capacities of classical models, such as DimeNet, GemNet, and MACE. For reference, see the Geometric Weisfeiler-Lehman (WL) paper."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper proposes path complex-based message passing (PCMP), supported by detailed graph theory, and achieves good results on several molecular tasks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces Path Complex-based Message Passing (PCMP) and achieves promising results on molecular property prediction benchmarks. However, there are major weaknesses in this paper, including a **lack of literature review, novelty, and evaluation on current benchmarks**, and it needs to be further revised."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Incorporating geometric features like bonds, angles, dihedrals, and improper angles in the modeling is not novel, as seen in Fig. 2 of [1] and Fig. 2 of [2]. Many existing works already involve these many-body terms.\n- Although the paper introduces the concept of a “path complex,” the actual features used, which are bond distances and angles (Table 5), are standard. Similar methods already exist, such as hierarchical message passing for updating geometric embeddings in [2]. Also, models like NequIP, Allegro, and MACE employ **many-body expansions** in message passing, leveraging **tensor products** to incorporate higher-order geometric tensors (a path fusing process [3]), which go beyond the basic features mentioned in this paper. PCMP may be a subset of the tensor products. A more comprehensive literature review is needed [4, 5].\n- The primary weakness is that the authors claim to be inspired by MD force fields, but **no results on any standard MD benchmark, such as MD17, rMD17, or MD22, are provided**, not to mention conducting MD simulations driven by this MLFF. Since the paper focuses on molecular 3D structures, **it is necessary to prove invariance or equivariance**, which is missing here. Instead, this paper provides a list of graph path theories, which appears more relevant to topological graphs and is insufficient to address geometric graphs. Furthermore, despite claiming that the method “enables systematic exploration of connectivity and interaction for analyzing complex systems and networks,” **there are no experiments on these tasks supporting this claim**.\n- The GEM paper is relatively old, and we actually do not need to generate 3D structures from SMILES using RDKit for the 2D molecule datasets. Besides the datasets mentioned above, **numerous 3D molecular datasets for DFT-level property prediction, such as QM9 (with 12 targets), OC20, OE62, and PCQM4Mv2, are available**. I strongly recommend evaluating PCMP on these benchmarks for a more comprehensive assessment.\n\n[1] Wang, Yusong, et al. \"Enhancing geometric representations for molecules with equivariant vector-scalar interactive message passing.\" Nature Communications 15.1 (2024): 313.\n\n[2] Pei, Hongbin, et al. \"Hago-net: Hierarchical geometric massage passing for molecular representation learning.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 13. 2024.\n\n[3] https://docs.e3nn.org/en/latest/api/o3/o3_tp.html\n\n[4] Zhang, Xuan, et al. \"Artificial intelligence for science in quantum, atomistic, and continuum systems.\" arXiv preprint arXiv:2307.08423 (2023). **Section 5.2**\n\n[5] Han, Jiaqi, et al. \"A Survey of Geometric Graph Neural Networks: Data Structures, Models and Applications.\" arXiv preprint arXiv:2403.00485 (2024)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Overall, I believe the authors present a good tool to predict molecular properties. However, the following questions should be addressed to clarify key aspects and improve rigor:\n\n1. How does the accuracy of your model compare with more recent models, such as M3GNet, MACE, or EquiformerV2?\n\n - In addition, you only compared accuracy; how does the speed compare with other models?\n\n2. Is it possible to expand this framework to periodic systems, i.e. inorganic materials?\n\n3. **Interpretability of Path Features**: Could the authors explore or comment on how path features across different orders contribute to the final prediction? Are there plans to visualize or interpret specific paths in relation to molecular properties?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* Originality: The PCMP model presents a unique innovation in molecular property prediction by applying path complexes, which go beyond conventional graph-based representations. The incorporation of multi-order path complexes allows for capturing high-order interactions like bond angles and dihedral angles. This approach offers a fresh perspective on molecular graph representation, making PCMP stand out from other GNN-based models that primarily rely on node and edge interactions.\n* Quality: The paper provides a thorough experimental validation, comparing PCMP with a diverse set of baseline models, including both pretrained and non-pretrained GNNs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a novel approach called Path Complex Message Passing (PCMP) for molecular property prediction using geometric deep learning. Unlike traditional graph neural networks (GNNs) that operate on molecular graphs, PCMP employs path complexes that capture multi-body interactions in molecules through paths of various orders. The model’s hierarchical message-passing mechanism updates high-order paths first, followed by lower-order paths, facilitating effective feature communication between these paths. Extensive experiments on benchmark datasets demonstrate that PCMP achieves state-of-the-art results in molecular property prediction, showcasing its potential to model complex molecular interactions comprehensively."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* Clarity: I suggest the authors move some of the technical details to the Appendix.\n* Experimental Limitations: Although the experiments are extensive, the paper could benefit from a more diverse set of benchmarks. The current datasets focus primarily on small to medium-sized molecules, while macromolecular structures, such as proteins, are absent from the evaluation. In addition, a test of efficiency, i.e., training or inference speed, is missing."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Can the authors provide empirical validation on MM force field datasets to substantiate their claim of mimicking MM force fields?\n2. Are there specific path orders or features that consistently contribute more to accurate predictions? Could this be visualized or quantified?\n3. Can the model be used to capture the improper angle? Is it possible to implement this?\n4. For Figure 4, does the path complex graph alleviate over-squashing? If so, can the authors provide an empirical study on this, for example, comparing the effective resistance?\n5. The current experiments contain only validation on small-molecule datasets; what about large molecules? How does the model scale to larger molecules?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "1. **Innovative Modeling of Molecular Interactions**: PCMP introduces path complexes, offering an approach to capture both local and global molecular features that go beyond traditional graph representations.\n2. **Methodological Rigor**: The hierarchical message-passing mechanism is well-detailed, showing how path complexes of different orders contribute to the molecular representation.\n3. **Thorough Ablation Studies**: The authors provide in-depth ablation studies that highlight the importance of various path orders and message-passing mechanisms, strengthening the evaluation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents **Path Complex Message Passing (PCMP)**, which introduces path complexes to model intricate chemical and non-chemical interactions for molecular property prediction. The model demonstrates promising results on molecular benchmark datasets and aims to provide a more detailed molecular representation by incorporating high-order interactions in a path complex framework."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. **Unsubstantiated Claim on Force Field Mimicking**: A major claim of the paper is that PCMP mimics the molecular mechanics (MM) force field, yet no benchmarks or empirical results using MM force field datasets (e.g., MD17, MD22) are provided to substantiate this claim. Without benchmarking against MM force fields, the claim appears unsupported, and this oversight detracts from the paper's validity in this area.\n \n2. **Computational Complexity**: The inclusion of path complexes, especially higher-order ones, is likely computationally demanding. However, the authors do not provide insights into potential trade-offs, such as runtime or scalability on larger datasets.\n\n3. **Lack of Equivariance in Model Design**: Given the model’s target application in molecular property prediction, its architecture does not incorporate rotational or translational equivariance, which would enhance its ability to handle spatial molecular data more robustly. Adding equivariant layers could make the model better suited to capturing geometry-sensitive properties.\n\n4. **Interpretability**: The model’s complex hierarchical structure might hinder interpretability, as it’s not clear which paths contribute most significantly to predictions or whether high-order interactions have consistent relevance across datasets. A comprehensive interpretability study is recommended."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1) What is the time and memory complexity of constructing and operating on path complexes compared to traditional graph-based methods? How does the method scale with molecule size?\n\n2) Related to above. Could you provide empirical justification for using 3-path complexes as the maximum order? What happens with higher orders?\n\n3) Could you provide a brief description of each dataset's characteristics in the main paper?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper displays technical rigor in developing its mathematical foundations, establishing formal proofs for path complex properties and their relationship to molecular structures. The proposed path complex representation is well motivated from a chemistry perspective, making explicit connections to molecular force fields and showing how different path orders correspond to physical properties (bond lengths, angles, and dihedral angles). The architecture is novel. While limited, the experimental results do show consistent improvements across multiple benchmark datasets, and the ablation studies help in distilling the contribution of different components of the model."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces Path Complex Message Passing (PCMP), a method for molecular property prediction that represents molecules using path complexes of different orders to capture various aspects of molecular structure (bond lengths, angles, and dihedral angles). The work is heavily theoretical. The method is tested on five subsets of MoleculeNet, showing improvements over baseline models. Most of the paper's content is theoretical development and proofs, with a relatively brief experimental section."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper suffers from several weaknesses. In the Introduction, the authors refer to quite 'old' papers. The experimental validation is notably thin compared to the extensive theoretical development, taking up only about 2 pages of the 10-page paper. The empirical improvements, while consistent, are relatively modest and don't seem to justify the substantial complexity introduced by the method. There's inadequate discussion of computational overhead and scalability. The path complex representation likely introduces significant computational costs, but this isn't analyzed. Even though the datasets are described in the appendix, the authors haven't included any brief description of the datasets or the tasks at hand: for example, for QM9, one cannot tell from the main paper which property the authors are regressing against."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024path,\ntitle={Path Complex Message Passing for Molecular Property Prediction},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3nwlXtQESj},\nnote={under review}\n}"
},
"abstract": {
"value": "Geometric deep learning (GDL) has demonstrated enormous power in molecular data analysis. However, GDL faces challenges in achieving high efficiency and expressivity in molecular representations when high-order terms of the atomic force fields are not sufficiently learned. In this work, we introduce message passing on path complexes, called the Path Complex Message Passing, for molecular prediction. Path complexes represent the geometry of paths and can model the chemical and non-chemical interactions of atoms in a molecule across various dimensions. Our model defines messages on path complexes and employs neural message passing to learn simplex features, enabling feature communication within and between different dimensions. Since messages on high-order and low-order path complexes reflect different aspects of molecular energy, they are updated sequentially according to their order. The higher the order of the path complex, the richer the information it contains, and the higher its priority during inference. It can thus characterize various types of molecular interactions specified in molecular dynamics (MD) force fields. Our model has been extensively validated on benchmark datasets and achieves state-of-the-art results.\nThe code is available at \\url{https://anonymous.4open.science/r/Path-Complex-Neural-Network-32D6}"
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Molecular Property Prediction",
"Path Complex",
"Geometric Deep Learning",
"High-order Interaction",
"Low-order Interaction"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/ce1d2e8f99cb7682646d8a3c5aa2bc1499c52534.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to physical sciences (physics, chemistry, biology, etc.)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Path Complex Message Passing for Molecular Property Prediction"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3ogIALgghF | Automatic Curriculum Expert Iteration for Reliable LLM Reasoning | main | Active | Large Language Models;Reasoning;Hallucinations;Laziness;Alignment | alignment, fairness, safety, privacy, and societal considerations | 5;6;6;8 | 4;3;5;4 | 2;3;3;3 | 3;3;2;3 | 3;3;3;4 | 6.25 | 4 | 2.75 | 2.75 | 3.25 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "See weaknesses."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper is clearly motivated, the problem of hallucinations in LLM reasoning has not been explored as much.\n- The paper presents the problem, the methods and the baselines clearly.\n- The results on the three datasets, MATH, blocksworld and boardgameQA are comprehensive and provide sufficient evidence of the utility of the method.\n- The authors also present sufficient ablations of their method, with varying versions of the curriculum used."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper addresses the problem of hallucinations in reasoning for large language models. Specifically, in methods that use expert iteration for improving this reasoning in LLMs. The authors propose adding a Refusal option to the EI pipeline and rewarding the refusal when the problem has a certain level of difficulty (length of reasoning is used as a proxy for difficulty). Based on the reward that a response gets, the data for the next iteration of EI is selected accordingly. To balance refusal and improvements in reasoning, the authors propose using a curriculum to balance the two objectives. Based on measures of accuracy and refusal rate, the authors show the effectiveness of their method compared to baselines and ablations."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Missing citations for central EI works: STaR (https://arxiv.org/abs/2203.14465), Training Chain-of-Thought via Latent-Variable Inference (https://arxiv.org/abs/2403.04642)\n- What other measures of difficulty are possible (a very recent paper that the authors can choose to ignore since it came out after the paper deadline: https://arxiv.org/abs/2410.04707)? Can a linear probe decode difficulty? A discussion is needed.\n- Similarly, are other mechanisms of knowing when to say \"I don't know\" possible? Is a linear probe enough for refusal? Like in Language Models (Mostly) Know What They Know (https://arxiv.org/abs/2207.05221).\n- Where could the current framework of length-based difficulty fail? When sampling multiple times, what is the variation in the number of steps needed to reach the answer for a problem?\n- Is the number of iterations for Auto-CEI, EI, etc., matched? More generally, how does the amount of training data / compute required vary across the method and baselines? I think some of the implementation details could be moved to the main text in the revision. This would help readability.\n- How could this method be extended beyond expert iteration, to more online methods of learning like PPO or REINFORCE?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Why is the process of updating R called curriculum learning? IMO, it is more like a search process; curriculum learning is about learning elementary material first and then complex material."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Using expert iteration is a good idea.\n2. The authors consider the trade-off between avoiding hallucinations and avoiding laziness, which is important.\n3. This paper aims to avoid reasoning hallucinations and to teach the model to say IDK on reasoning tasks beyond its ability."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes an automatic curriculum expert iteration (AUTO-CEI) to enhance LLM reasoning and align responses to the model's capabilities, enabling the model to answer within its limits and decline when tasks exceed them.\nExpert iteration explores the reasoning trajectories near the LLM policy, guiding incorrect paths back on track to reduce compounding errors and improve robustness.\nAUTO-CEI uses the length of reasoning steps to measure difficulty and designs an automatic curriculum for expert iteration that rewards correct reasoning.\nAUTO-CEI automatically estimates the boundary of the LLM's reasoning capacity to achieve a reasonable alignment that maximizes capacity while controlling behavior."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Although the paper considers the trade-off between helpfulness and laziness, it controls this trade-off via hyper-parameters, instead of proposing principles or methods for choosing the optimal hyper-parameters.\n2. The evaluation metrics in the paper are incomplete; for example, IDK only measures the proportion of times the model outputs \"IDK\" without assessing the accuracy of those IDK responses.\n3. The experiments in the paper may involve unfair comparisons, as AUTO-CEI conducts multiple searches for EI, resulting in significantly more training steps and data compared to other methods."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. The MATH dataset has manually labeled question difficulty; it would be helpful to include a figure illustrating the correlation between generated text length and question difficulty as a validation of this signal.\n\n2. Is the method effective across datasets with different distributions?\n\n3. There are many established metrics for measuring uncertainty with different confidence levels, such as AP used in R-tuning. The current paper only reports results for the binary case, i.e., answering or refusing to answer. I am curious about the performance of this method at different confidence levels."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper presents a reasonable motivation that previous approaches to addressing hallucination problems often suffer from overcorrection, leading to overly conservative responses from LLMs on many questions.\n\n2. The paper introduces an innovative reward mechanism that dynamically balances the choice between responding or refusing to answer based on question difficulty. Subsequent experiments demonstrate that this method effectively handles the trade-off between answering and refusing. Figure 3 also illustrates that as the generated text length increases, more questions are declined.\n\n3. The writing is clear, and the figures are well-designed, enhancing the overall readability of the paper."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper enhances LLM calibration ability in reasoning tasks by mitigating hallucinations and reducing lazy behaviors. The method uses a curriculum of expert iteration, balancing answering the question and saying 'I don't know' by adapting the reward system according to the difficulty of reasoning tasks. It shows great balancing in assertiveness and conservativeness."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper’s reward uses the length of the generated text to measure the level of the exploration. This limits the method’s generalizability and scalability, as it’s challenging to find a universal text processing metric for different types of text (e.g., code, tables). Furthermore, reward parameters such as c1 and c2 also vary across datasets, which makes me concerned about the method’s effectiveness in situations where training and testing distributions are not similar. \n\n2. The experimental results show that even the best method in this paper achieves overall accuracy significantly lower than Vanilla EI. While I understand that incorporating refusal may lead to a drop in accuracy, achieving performance closer to Vanilla EI would be more compelling, as the current method shows that adding a refusal option will significantly reduce the accuracy."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weaknesses.\n\nAdditionally:\n\n(1) Have you tested the correlation between the number of reasoning steps and difficulty, as this is a key assumption of AUTO-CEI? If not, I suggest conducting a test experiment on the MATH dataset, considering it has manually labeled difficulty tags.\n\nTypos:\nline 286 N -> K"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* This paper is well-written and presents clear ideas.\n\n* The idea of aligning responses to the model's capabilities–assertively answering within its limits and declining when tasks exceed them is novel.\n\n* The motivation behind AUTO-CEI is straightforward and strong."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes Automatic Curriculum Expert Iteration (AUTOCEI) to enhance LLM reasoning and align responses to the model's capabilities–assertively answering within its limits and declining when tasks exceed them, so as to mitigate hallucination and laziness in reasoning tasks. Through experiments on BoardgameQA, MATH and Blocksworld with Llama-3.1-8B-instruct, The authors demonstrate the effectiveness of AUTO-CEI, achieving superior alignment by effectively balancing assertiveness and conservativeness."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* AUTO-CEI introduces additional training overhead. Considering that the process of AUTO-CEI includes expert iteration (each iteration requires a large amount of resampling), the additional training overhead cannot be ignored. I suggest the authors align the sampling cost across the different baselines.\n\n* Limited validation across models. The effectiveness of AUTO-CEI is validated only on Llama-3.1-8B-instruct. Further exploration is needed to assess the generalizability to other models.\n\n* The performance is significantly affected by the hyperparameter λ. Table 2 shows that the performance of AUTO-CEI fluctuates greatly under different λ, which can lead to considerable additional cost during actual usage."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Auto-CEI pushes and estimates the limits of LLM reasoning capacities and aligns LLM's assertive and conservative behaviours according to these limits for more reliable reasoning."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024automatic,\ntitle={Automatic Curriculum Expert Iteration for Reliable {LLM} Reasoning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3ogIALgghF},\nnote={under review}\n}"
},
"abstract": {
"value": "Hallucinations (i.e., generating plausible but inaccurate content) and laziness (i.e. excessive refusals or defaulting to \"I don't know\") persist as major challenges in LLM reasoning. Current efforts to reduce hallucinations primarily focus on factual errors in knowledge-grounded tasks, often neglecting hallucinations related to faulty reasoning. Meanwhile, some approaches render LLMs overly conservative, limiting their problem-solving capabilities. To mitigate hallucination and laziness in reasoning tasks, we propose Automatic Curriculum Expert Iteration (Auto-CEI) to enhance LLM reasoning and align responses to the model’s capabilities--assertively answering within its limits and declining when tasks exceed them. In our method, Expert Iteration explores the reasoning trajectories near the LLM policy, guiding incorrect paths back on track to reduce compounding errors and improve robustness; it also promotes appropriate \"I don't know\" responses after sufficient reasoning attempts. The curriculum automatically adjusts rewards, incentivizing extended reasoning before acknowledging incapability, thereby pushing the limits of LLM reasoning and aligning its behaviour with these limits. We compare Auto-CEI with various SOTA baselines across logical reasoning, mathematics, and planning tasks, where Auto-CEI achieves superior alignment by effectively balancing assertiveness and conservativeness."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Large Language Models",
"Reasoning",
"Hallucinations",
"Laziness",
"Alignment"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/f1170ffa46567404483005fcf7740f374314589b.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Automatic Curriculum Expert Iteration for Reliable LLM Reasoning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3p4raemLAH | Targeted Unlearning via Single Layer Unlearning Gradient | main | Active | Machine unlearning;multi-modality;CLIP;vision-language model (VLM);stable diffusion;privacy protection;copyright protection;trustworthy and safe machine learning | alignment, fairness, safety, privacy, and societal considerations | 5;5;5;8 | 2;4;4;4 | 3;3;3;3 | 2;3;2;3 | 3;3;2;3 | 5.75 | 3.5 | 3 | 2.5 | 2.75 | 0.333333 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. In Table 2, why are there no variance experiments to illustrate the stability of the various metrics?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The method introduces a novel approach to targeted unlearning by updating a single targeted layer using a one-time gradient computation, which is distinct from more common methods that require iterative model updates across multiple layers.\n\n2. The paper presents two new metrics, layer importance and gradient alignment, to determine the optimal layer and gradient direction for unlearning, enhancing the targeted precision of the process.\n\n3. The experiments were sufficient for me."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes an innovative approach to the issue of machine unlearning, which involves removing the influence of specific data subsets from trained machine learning models without retraining from scratch."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Table 2: Performance overview of different unlearning methods on UnlearnCanvas. In this table, my intuition is that there is a lack of variance experiments, i.e., running multiple rounds to see the best and worst performance of the algorithm."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Based on the weaknesses part, here are some corresponding suggestions:\n\n1. **Incorporate a Comprehensive Related Work Section**\n \n If available, add a dedicated Related Work section that reviews pertinent literature on machine unlearning and saliency-based methods. \n\n2. **Enhance Clarity in the Single Layer Update Methodology**\n \n The methodology for selecting and updating the single targeted layer is not clearly explained, potentially causing confusion among readers. Please refer to the weaknesses part and provide clearer explanations.\n \n3. **Strengthen and Expand the Experimental Evaluation**\n \n Following the weaknesses part, could you provide more numerical results on the VLM task, and conduct more experiments under the previous evaluation metrics on the image classification task?\n\n4. **Improve Formatting and Structural Consistency**\n \n The paper's formatting, such as line spacing between titles and sections, lacks consistency, which can detract from readability and professionalism."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. **Innovative Saliency-Based Approach to Machine Unlearning**\n \n The paper introduces a novel saliency-based method specifically designed to address the machine unlearning problem. The authors present the SLUG technique, which efficiently removes targeted information by updating only a single designated layer of the model through a one-time gradient computation. This method offers a streamlined solution compared to traditional unlearning techniques, which often require extensive model modifications and incur high computational costs.\n\n2. **Comprehensive Validation Across Diverse Downstream Tasks**\n \n The effectiveness of the proposed SLUG method is thoroughly validated across three distinct downstream tasks, demonstrating its versatility and robustness: CLIP-based image classification, Stable Diffusion-based image generation, and vision-language models (VLMs)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a novel saliency-based method for the machine unlearning task. The proposed approach, named Single Layer Unlearning Gradient (SLUG), effectively removes targeted information by updating only a single specific layer of the model through a one-time gradient computation. Compared to traditional unlearning techniques, SLUG significantly reduces computational costs while ensuring minimal impact on the model's performance for unrelated content.\n\nThe authors evaluate SLUG using metrics such as low computational cost, effective unlearning, and utility retention. They demonstrate the method's efficacy across three downstream tasks: CLIP Zero-Shot Classification, Generative Models on UnlearnCanvas benchmark and Vision-Language Models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. **Lack of Related Work Discussion**\n \n The paper does not include a comprehensive review of related work. This omission makes it difficult to contextualize the proposed method within the existing body of research and to understand how it compares to or improves upon previous approaches in machine unlearning and saliency-based methods.\n\n2. **Insufficient Clarity in Single Layer Update Methodology**\n \n The description of the **Single Layer Unlearning Gradient (SLUG)** method lacks clarity, particularly in the selection and updating of the single targeted layer. This can lead to confusion among readers regarding the following aspects:\n \n - **Balancing Equations (7) and (8)**: The paper does not adequately explain how these equations balance the unlearning process. Additional textual explanations are needed to clarify the interplay between these equations and their role in achieving effective unlearning.\n \n - **Computation of Single Gradient Direction**: The rationale behind choosing the gradient direction based on the initial parameters is not sufficiently elaborated. More detailed explanations are necessary to justify this choice and its impact on the unlearning process.\n \n - **Consistency in Parameter Updates**: Although the authors emphasize updating parameters in a single layer, this point is not clearly reiterated in Section 3.2. Ensuring consistent emphasis throughout the methodology section would enhance understanding.\n\n3. **Limited and Inadequate Experimental Evaluation**\n \n The experimental results presented in the paper are not particularly compelling, and the evaluation metrics used are insufficiently comprehensive. Specific issues include:\n \n - **Unlearning for CLIP (Section 4.2)**:\n - **Optimal Results Visualization**: The results for different learning rates are not clearly highlighted. Using color-coding to indicate the best-performing results would improve readability and interpretation.\n - **Evaluation Metrics Consistency**: The paper does not maintain consistency with established definitions for classification unlearning tasks, such as those outlined in \"Model Sparsity Can Simplify Machine Unlearning.\" Aligning the evaluation metrics with these definitions would strengthen the validity of the results.\n \n - **Unlearning for Stable Diffusion (Table 2)**:\n - **Limited Performance Advantages**: Beyond demonstrating efficiency, the method does not show significant advantages in other performance metrics. This limitation raises questions about the overall effectiveness of SLUG in this context.\n \n - **Application to Vision-Language Models (VLMs)**:\n - **Lack of Reported Data**: Although the paper highlights the application of the unlearning method to VLMs and mentions corresponding evaluation metrics, it fails to report the actual data results. This absence undermines the persuasiveness of the claims regarding the method's effectiveness in VLMs."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please check the questions in the weaknesses above."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is well-organized and easy to follow. This approach achieves effective unlearning with just a single gradient update on one layer, demonstrating remarkable efficiency, particularly in the context of large models.\n\n2. In the proposed approach, the author employs the diagonal of the Fisher information matrix to approximate layer importance, thereby enhancing interpretability.\n\n3. The author conducted extensive experiments on large-scale multimodal models including CLIP, Stable Diffusion, and VLMs, demonstrating the wide applicability of the proposed approach and empirically demonstrating its advantages in balancing efficiency and model utility. And the author provided complete code that supports reproducibility."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The author proposes a method that requires only a one-time gradient calculation to update a single layer of the model to achieve unlearning. By approximating layer importance using the diagonal of the Fisher information matrix and balancing gradient alignment, the author selects a single target layer and updates its parameters in a single step to achieve the desired outcome. The effectiveness of this approach is validated through extensive experiments."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The proposed scheme only updates the most important layer to achieve excellent forgetting effects. Although the experimental results provide an empirical guarantee of forgetting, intuitively there must be residual information in the remaining layers. From the experimental results, the difference in importance between layers is not large. Hence, it feels more reasonable to update as many layers as possible while maintaining model performance. It would be better to add more discussion.\n2. The design of the approach requires access to all forgotten and retained data. However, the targeted domain involves relatively large datasets, requiring substantial storage space. If complete access to the data is not feasible, could this negatively impact the effectiveness of the scheme?\n3. The paper's description of layer selection is not clear enough, and I could not clearly relate the figure to the Pareto optimal set. I cannot clearly understand how the author balances the importance of layers and gradient alignment.\n4. From the experimental results of unlearning for stable diffusion, it can be seen that unlearning leads to a slight decrease in the quality of image generation.\n5. The experiment of unlearning for VLMs lacks quantitative analysis and only shows examples. Adding quantitative analysis would provide clearer evidence for the method."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "## Questions\n\n1. **Retain Set Curation:** Could the authors provide a detailed explanation of how the retain set is curated? Clarifying this process is essential for reproducibility and assessing the method's robustness.\n2. **Iterative Update Performance:** It is recommended to report the performance of the iterative update version of SLUG. If performance metrics decline, this could highlight underlying foundational issues that need to be addressed."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "## Strengths\n\n1. **Balanced Unlearning and Performance:** The proposed method effectively balances the unlearning process with the model's general performance, addressing a crucial trade-off in model management.\n2. **Computational Efficiency of SLUG:** SLUG requires gradient computation only once, offering two significant advantages:\n - **Faster Computation:** Reduces overall computation time.\n - **Prevention of Over-Unlearning:** Minimizes the risk of excessively removing learned information.\n3. **Generalization Across Models:** SLUG demonstrates effectiveness not only on stable diffusion models but also yields promising results on Vision-Language Models (VLMs), showcasing its potential for broader applicability."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a method called Single Layer Unlearning Gradient (SLUG), aimed at addressing the challenges of unauthorized generation of privacy-related and copyright-infringing contents. SLUG is designed for unlearning of targeted information from trained machine learning models, requiring only a single gradient computation (and then reuse it) and updating only one layer of the model. This approach minimizes computational costs and maintains the model’s overall utility, particularly for unrelated tasks.\nThe method has been tested with popular models like CLIP and Stable Diffusion, demonstrating superior efficiency and effectiveness compared to existing methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "## Weaknesses\n\n1. **Dependence on Retain Set:** SLUG relies on a retain set to preserve general performance. The methodology for curating this set is critical, yet the paper lacks sufficient discussion or guidelines to ensure reproducibility.\n2. **Incomplete Computational Time Analysis:** While Table 1 presents a computation time comparison, the analysis based on $O(N_f + N_r)$ overlooks key factors:\n - **Iterative Parameter Updates:** SLUG requires iterative updates of model parameters as described in Equation 9.\n - **Layer Importance and Gradient Alignment:** The time associated with determining layer importance and performing gradient alignment is not accounted for, potentially underestimating the actual computational cost.\n3. **Insufficient Evaluation on VLMs:** The claims regarding SLUG's performance on VLMs are not fully substantiated. More comprehensive experiments are necessary to convincingly demonstrate its superiority in this context."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose an efficient unlearning method for targeted information removal from multi-modal foundation models using single layer update with one-time gradient calculation."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024targeted,\ntitle={Targeted Unlearning via Single Layer Unlearning Gradient},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3p4raemLAH},\nnote={under review}\n}"
},
"abstract": {
"value": "The unauthorized generation of privacy-related and copyright-infringing content using generative-AI is becoming a significant concern for society, raising ethical, legal, and privacy issues that demand urgent attention. Recently, machine unlearning techniques have arisen that attempt to eliminate the influence of sensitive content used during model training, but they often require extensive updates in the model, reduce the utility of the models for unrelated content, and/or incur substantial computational costs. In this work, we propose a novel and efficient method called Single Layer Unlearning Gradient (SLUG), that can unlearn targeted information by updating a single targeted layer of a model using a one-time gradient computation. We introduce two metrics: layer importance and gradient alignment, to identify the appropriate layers for unlearning targeted information. Our method is highly modular and enables selective removal of multiple concepts from the generated outputs of widely used foundation models (e.g., CLIP), generative models (e.g., Stable Diffusion) and Vision-Language models. Our method shows effectiveness on a broad spectrum of concepts ranging from concrete (e.g., celebrity name, intellectual property figure, and object) to abstract (e.g., novel concept and artistic style). Our method also exhibits state-of-the-art efficiency with effective unlearning and retention on the comprehensive benchmark UnlearnCanvas. Our code is available at https://anonymous.4open.science/r/SLUG-6CDF"
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Machine unlearning",
"multi-modality",
"CLIP",
"vision-language model (VLM)",
"stable diffusion",
"privacy protection",
"copyright protection",
"trustworthy and safe machine learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/a36a86b5db4a3df168395e176bcecc3c7f1035c3.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Targeted Unlearning via Single Layer Unlearning Gradient"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3qDB9j6p3S | Labeled TrustSet Guided: Combining Batch Active Learning with Reinforcement Learning | main | Active | Active learning;Reinforcement Learning | other topics in machine learning (i.e., none of the above) | 3;3;5;6 | 5;3;3;4 | 2;2;2;3 | 1;1;3;3 | 2;1;2;3 | 4.25 | 3.75 | 2.25 | 2 | 2 | -0.174078 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to the Weaknesses section."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The BRAL-T framework is quite effective considering good performance on some standard datasets. Overall, the paper is easy to follow and well explained."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a data selection method named as TrustSet which takes into account uncertainty, diversity and class distribution. TrustSet in combination with reinforcement learning based sampling policy introduced BRAL-T framework which extends the benefits of TrustSet to select meaningful data."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "It’s not clear why partcularly GradNd score is used for TrustSet extraction in case of large datasets and complex models. GranND scores appear to be highly sensitive towards small changes in model parameters and may also add high computational overhead.\n\nPerhaps large datasets such as ImageNet could be used to test the performance and time complexity of the proposed method in an image classification setup.\n\nIt’d be interesting to understand the motivation to select negative Wasserstein distance as reward function since it’s considerably computationally expensive and might turn out to be challenging in high-dimensional spaces. It’d also be useful to compare the proposed framework to a few recent baselines as well."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "* I find some of the experimental settings to be confusing. As stated in Appendix B, the active learning experiments involved querying nearly the entire unlabeled datasets—for instance, 5,000 out of 5,436 for BreakHis, 5,000 out of 5,132 for PneumoniaMNIST, and 4,000 out of 4,695 for Waterbird. This large query budget effectively undermines the purpose of active learning. I understand these settings follow [1], but I would appreciate it if the authors could explain their rationale for adopting such an approach.\n\n* In my humble opinion, Figure 1 seems to be a generic framework of active learning with RL, and does not provide much insight for the proposed BRAL-T method. I would suggest the author add more information to Figure 1 or merge it with Figure 2.\n\n* I would kindly suggest the authors double-check their citations. For example, CoreSet is referred to [2] instead of the original paper, and both BADGE and KMeans refer to the same paper [3].\n\n\n[2] Zhan, Xueying, et al. \"A comparative survey of deep active learning.\" arXiv preprint arXiv:2203.13450 (2022).\n[3] Ash, Jordan T., et al. \"Deep batch active learning by diverse, uncertain gradient lower bounds.\" arXiv preprint arXiv:1906.03671 (2019)."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* The paper introduces a novel TrustSet approach designed to balance class distribution and mitigate the long-tail problem present in CoreSet. In addition, incorporating reinforcement learning to extend the properties of TrustSet, based on labeled data, to unlabeled data is a promising direction in active learning.\n\n* The authors perform comprehensive experiments across eight active learning benchmarks and various long-tailed active learning / fine-tuning tasks. The authors also perform rigorous baseline comparisons and ablation studies, demonstrating that each component of the framework contributes meaningfully to performance improvements. \n\n* The presentation is overall well-organized, with detailed figures and algorithms that illustrate each component of the proposed methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents Batch Reinforcement Active Learning with TrustSet (BRAL-T) framework, which combines batch active learning (BAL) and reinforcement learning (RL). This method introduces TrustSet, which selects a balanced subset of labeled data that improves model performance, especially for data with long-tail distribution. To adapt TrustSet for unlabeled data, BRAL-T uses a Reinforcement Learning (RL) sampling policy trained on the labeled set to select unlabeled samples approximating the qualities of TrustSet data. The authors validate the performance of BRAL-T across several image classification and fine-tuning benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* To my understanding, TrustSet is a pruning/selecting strategy for **labeled data**, and the benefits highlighted in the abstract \"selects the most informative data from the labeled dataset, ensuring a balanced class distribution to mitigate the long-tail problem\" and \"optimizes the model’s performance by pruning redundant data and using label information to refine the selection process\" (L018-L022) both apply to the **labeled data**. However, the authors do not provide theoretical proof or experimental results to substantiate these claims regarding the **labeled data**. \n I acknowledge the authors provide some discussion about BRAL-DiffSet in Section 5.4. However, in my personal opinion, this ablation study validates the effectiveness of EL2N score on active learning, rather than the effectiveness of TrustSet on the selection of labeled data. Given that TrustSet is a key contribution of this work, I believe it is crucial to validate its effectiveness through empirical evidence or theoretical analysis. \n\n* Setting the number of candidate actions $A_c$ to a fixed number might be an issue, particularly for imbalance/long-tail datasets. In particular, if the clustering divided the unlabeled $U$ into $C$ as expected with each cluster primarily containing samples from the same class, the resulting $c$ clusters would be highly imbalanced. In such case, setting $A_c$ to a fixed number will make the sub-clusters, i.e., the candidate actions, $U_c^a$ to be imbalanced as well across $c$ clusters, which seems to contradict the objective of achieving balanced data selection. Can the authors elaborate on why choosing a fixed value for $A_c$?\n\n* It seems that the experiments do not contain any RL-based active learning baselines. Could the authors elaborate on why TAILOR [1] is not included, as it also aims to find class-balanced examples in active learning by incorporating RL?\n\n* Some presentation issues. 
For example, most citations in Section 2 and Section 5 are in double brackets; The legends in Figure 4 are barely readable; The captions are above tables in the main manuscript but are below tables in the Appendix; the baseline methods WAAL, LossPrediction, and SIMILAR are not discussed in Related Works."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Please see “weaknesses” part."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Defining state space with $L$ and $U$ is novel.\n2. Extensive experiments on long-tail datasets are interesting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes the Batch Reinforcement Active Learning with TrustSet (BRAL-T) ensuring a class-balanced sampling for long-tail problem. By introducing RL in batch active learning scenario, TrustSet selects high-quality samples from the unlabeled data. Extensive experiments demonstrate the effectiveness of the proposed BRAL-T."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Understanding and reading research papers is quite challenging. The notation should be well-defined in Section 3.\n2. TrustSet naively incorporates the concepts of class-balanced and curriculum learning into the existing methods such as GradNd and Super Loss.\n3. There are cases where a subset $S$ is subsampled from $L$. I’m curious about the intuition behind this approach and why the entire set $L$ isn’t used."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "No"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. It is a good trial to adopt the concept of reinforcement learning to help address batch active learning.\n2. This paper is structured, and well-written."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposed an RL-based method to address a classical setting, i.e., active learning. Specifically, this paper designed the reward function, state space, and action space to help the sample query stage. The experimental results were conducted among 5 benchmarks, validating its effectiveness."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Insufficient Experimental Support on the Design of Curriculum Learning Mechanism When learning the TrustSet, this paper adopts some concepts from curriculum learning, i.e., super loss, to help address the uncertainty of the selected samples, which stands strongly on an argument \"...Data with high GradNd scores tend to be difficult and uncertain samples...\", and this paper uses the results in Figure 3 to claim the rightness of this conjecture. However, compare to the version in CVPR’24, the authors delete the pilot experimental parts, (Figure 3) in original version, which further weakens the claim since no pilot experiment is built. Most importantly, we are not clear whether this issue only occurred for the GradNd-based methods since only this one is shown (What if other methods, like uncertainty score, BatchBald, GrandMatching, Submodular,..., have no such issue?), which I believe needs more comparing methods to help support such motivation to use the SuperLoss.\n\n2. Unclear performance bound about the TrustSet. The aim of this paper is to construct and optimize the selected samples, treating them as the best suited during each query. This also suggests that using all samples could be the best. Therefore, there should be a theoretical performance bound about the proposed RL method, which aims to approximate the optimal \"TrustSet\". Besides, does TrustSet have to be built by GraNd? Why not other methods? This paper needs to make a clear illustration for that.\nThe whole design of RL framework is built for the cluster-wise groups instead of each sample, which is obviously different from those traditional methods focusing on the sample score. It also means that the proposed RL method is only applicable to the group-level selection, which, intuitively, may not be effective when the number of samples in the unlablled pool is small (since each sample counts).\n\n3. I do not see clear novelty and value of such RL-based design. 
As this paper claims in Related Work about \"Active Learning with RL\", the deficiency of some methods using accuracy metrics are \" ...the relationship between the target model’s predictions and the training set is complex, making it difficult to train the RL policy...\" However, the reward function in the proposed framework is not based on an intuitive evaluation criteria but a TrustSet, which is not as measurable as the former. So how such a design could be more \"simple\" than those criteria-based methods to train the RL policy? Besides, the most important of RL is the design of the reward function, but I cannot see the clear value and novelty of designing such this reward function.\n\n4. Based on the results in Figure 4, this method works not as promising as expected. Besides, where is the results of Tiny-ImageNet with progressive data volume?not 2%-3%\n\n5. The time complexity analysis of this RL-based method lacks comparison to other methods, and the experimental results about that shall be presented (such as training time, FLOPS...)"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024labeled,\ntitle={Labeled TrustSet Guided: Combining Batch Active Learning with Reinforcement Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3qDB9j6p3S},\nnote={under review}\n}"
},
"abstract": {
"value": "Batch active learning (BAL) is a crucial technique for reducing labeling costs and improving data efficiency in training large-scale deep learning models. Traditional BAL methods often rely on metrics like Mahalanobis Distance to balance uncertainty and diversity when selecting data for annotation. However, these methods predominantly focus on the distribution of unlabeled data and fail to leverage feedback from labeled data or the model’s performance. To address these limitations, we introduce TrustSet, a novel approach that selects the most informative data from the labeled dataset, ensuring a balanced class distribution to mitigate the long-tail problem. Unlike CoreSet, which focuses on maintaining the overall data distribution, TrustSet optimizes the model’s performance by pruning redundant data and using label information to refine the selection process. To extend the benefits of TrustSet to the unlabeled pool, we propose a reinforcement learning (RL)-based sampling policy that approximates the selection of high-quality TrustSet candidates from the unlabeled data. Combining TrustSet and RL, we introduce the **B**atch **R**einforcement **A**ctive **L**earning with **T**rustSet (**BRAL-T**) framework. BRAL-T achieves state-of-the-art results across 10 image classification benchmarks and 2 active fine-tuning tasks, demonstrating its effectiveness and efficiency in various domains."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Active learning",
"Reinforcement Learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/085f25b2456a6d17e9cb4e08dd04d2f395ef88b5.pdf"
},
"presentation": null,
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Labeled TrustSet Guided: Combining Batch Active Learning with Reinforcement Learning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3qDhqj6qfu | TabKANet: Tabular Data Modeling with Kolmogorov-Arnold Network and Transformer | main | Active | Tabular Data Modeling; Kolmogorov-Arnold Network; Numerical Feature Embedding | foundation or frontier models, including LLMs | 3;3;3 | 5;3;4 | 2;2;1 | 2;2;1 | 2;2;2 | 3 | 4 | 1.666667 | 1.666667 | 2 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- What about KANs makes them a compelling choice for this application in particular: encoding numeric tabular data to pass into a downstream transformer? E.g., what theoretical properties are especially relevant for this application? Why not use them in other parts of the model?\n\n- Did you evaluate other numeric encoders within the same framework as your model, such as linear, MLP, and/or piecewise linear encoders?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "- Exploring the potential uses of KANs in tabular modelling is a relevant current topic.\n- The model shows some promise in outperforming other NN models.\n- Code is provided to reproduce some of the results."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a discriminative model for tabular data that is distinct from previous models in that it uses a KAN for embedding numeric features rather than linear or MLP layers. Its performance is compared to a selection of GBDT and NN tabular models. In the experiment settings evaluated, TabKANet consistently outperforms the other NN models but is mostly outperformed by the GBDT models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "On the whole, I think paper has a poor contribution due to issues with the evaluations and a lack of depth in justifying the proposed modelling choices. The overall novelty of the method is limited, being a relatively simple combination of existing modelling components (KANs, batch normalization, and transformers), and I don't think the rest of the paper has the depth or soundness to make up for that.\n\n- No hyperparameter tuning is done for baseline models. This is especially problematic for GBDT models, which require hyperparameter tuning for a practical comparison. This is out of line with prior research (e.g., see Gorishniy et al. 2021) and severely limits the utility of the comparison with GBDTs.\n\n- The NN models being compared to are not state-of-the-art, so the comparison to them does not indicate much about the paper's contribution. I would have at least liked to have seen TabR and TabPFN included, and other recent models that are widely used as baselines such as FT-Transformer and MLP-PLR would have been welcome.\n\n- The discussion of why KANs should be used in this area is lacking in depth. The paper doesn't provide theoretical or empirical insights into how they would be useful for tabular data in particular. Instead, there are just vague references to their flexibility. Ablations are also not provided to compare the contribution of the KAN part alone versus other encoders.\n\n- Using batch normalization instead of layer normalization is a fairly trivial tuning choice, and the discussion on page 5 is unclear and does not provide significant technical insights to justify treating it as an important decision.\n\n- Parts of the paper contain misleading claims:\n\t- The introduction indicates that existing attempts to use transformers in tabular modelling are using them to encode categorical variables, when in fact there's a much wider variety of transformer models for tabular data (some of which are given in the Related Work). 
In general, the proposed model is not contextualized with respect to the entire range of existing tabular transformer models.\n\t- The introduction also claims that the proposed model achieved \"identical performance\" to GBDTs on almost all datasets - the performance was not identical, and was lower on average in most cases (evaluation issues notwithstanding).\n\t- \"Current scientific research has not yet proposed a simple, stable, and universal numerical embedding module\" - this is a very strong claim that is not justified. MLPs, linear layers, and piecewise linear encodings arguably satisfy these criteria just as well as the proposed solution."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "* How were the datasets for the evaluation selected?\n* How were hyper-parameters tuned for all models?\n* How well does the model perform when removing the KAN layer?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper discusses encoding of continuous values for tabular classification. This is a hot topic, and the combination with the recently proposed KAN architecture is timely.\nThe model is evaluated on a wide variety of tasks and the datasets are discussed in detail."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a model for tabular data based on the Transformer architecture, similar to TabTransformer, but including a KAN layer for encoding continuous features. The model is evaluated on binary and multi-class classification tasks as well as regression and is found to outperform deep learning baselines and in some cases GBRT models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "# Main concern\nGiven that the novelty of the method is relatively small, the empirical evaluation of the method is critically important. However, I have several concerns regarding this:\na) It's unclear how the datasets for evaluation were selected. There are many established benchmark suites, such as the AutoML benchmark, TabZilla, OpenML-CC18 and the Grinsztajn collection. Not adhering to a standard benchmark suite allows for cherry-picking of datasets.\n\nb) Since the emphasis of the paper is on the encoding of continuous features, I think FTTransformer, and Gorishniy et al: On Embeddings for Numerical Features in Tabular Deep Learning are critical baselines to compare against.\n\nc) Some important ablations are missing; in particular, what happens if the KAN layer gets replaced by simple input scaling? This seems to be different from TabTransformer, which skips the transformer entirely for continuous data. Also, Table 8 shows a big improvement between the LN and BN versions. An obvious comparison here would be to TabTransformer with BN.\n \n\nd) The paper doesn't describe how hyper-parameters for the methods were tuned.\n\ne) Some deep baselines are missing. While it's not reasonable to compare against all publications, I would suggest comparing against TabR and TabPFN (even when subsampled to a maximum of 3000 samples, TabPFN performance is still strong, see McElfresh).\n\n\nI think the clarity of the paper could also be improved, and the novelty of the KAN is overstated, in particular wrt existing neural networks and \"On Embeddings for Numerical Features\".\n\n# Other Concerns\n## Overselling KAN\nThe paper states in several places that KANs are more powerful than MLPs; however, that is not strictly the case. A KAN can easily be represented with an MLP by constraining the MLP structure and potentially changing the activation function. 
For example, a KAN with piecewise linear splines with r pieces is equivalent to an MLP with ReLU activations where each node is replaced by a small neural network with r nodes. This view calls into question claims like line 63 \"This feature offers neural networks\n more flexible performance compared to Multilayer Perceptron\" and Line 115 \"rigidity in MLP\".\nAlso, neural networks with spline activation functions have long been studied, and it's unclear what (if any) novelty can be attributed to KANs.\n\n## Minor suggestions\nLine 028 \"ordered different features\" is hard to read and not very clear.\n\nLine 028: add citations for most commonly used and oldest business data format. I am not convinced by these claims. A lot of data is actually in spreadsheets and relational databases, neither of which are tabular data in the sense of the standard ML datasets. NoSQL data is also extremely common.\n\nLine 036: Citing Hollman for the prevalence of GBRT is strange, since TabPFN is a deep model that clearly outperforms GBRT.\n\nLine 043: None of the three arguments for neural networks seems sound, and I am a firm believer in using neural networks on tabular data. 1) is vague, 2) it is unclear what is meant by scalability and how it relates to multimodality 3) unsupervised schemes exist for tree-based models, though they are not as common.\n\nLine 070: It's unclear what is meant by \"business structure framework\"\n\nLine 215 1): It's unclear wrt what baselines you are discussing improvements. Both LN and BN are common in transformer models. Is this wrt KAN or wrt TabTransformer?\n\nLine 218 3): This is just concatenating features, right?\n\nFigure 2: The meaning of d is unclear; it seems to be the embedding size. However, it's unclear why the output would be reshaped to (m+n) * d. Is this just to input into the MLP? 
Also, the figure shows multiple rows in input and output, but the model operates on one row at a time, right?\n\nLine 223: It's unclear what's meant by \"subconscious best solution\".\n\nLine 377: \"predict auc scores\" I think you mean predict the target and compute AUC scores?\n\n## Typos\n\nLine 027 \"The tabular\" -> \"a tabular\"\n\nLine 030 \"medicineI\" ->\"medicine\"\n\nLine 053 \"they have used Transformers\" -> Transformers have been used.\n\nFigure 1: \"Nurmerical\" -> \"Numerical\"\n\nLine 212: Numercal -> \"Numerical\"\n\nAcknowledgements contain author instructions."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- General question. Why are numerical and categorical data processed separately? Would it make sense to explore a combined representation that integrates both types of features?\n\n- Line 221: The statement *\"Firstly, normalization for numerical items is crucial, which is essential to avoid gradient explosion, especially with real-world data.\"* raises a question: Why not simply normalize the numerical data before feeding it to the neural network? Using batch normalization introduces additional learnable parameters and can significantly affect training, as the normalization depends on batch size. Although this question is somewhat addressed later (Line 238), it introduces another point of confusion: *\"Repeatedly pairing numerical normalization results and category features will bring additional training data.\"* This may be more accurately described as data augmentation rather than additional data, which could be explored through an ablation study.\n\n- Line 223: Could you clarify the phrase *\"This is a subconscious best solution\"*? The entire sentence is somewhat confusing and may need rephrasing for clarity.\n\n- Line 255: The sentence about data splitting is unclear. Did you apply cross-validation, or was the data split into three groups: training, testing, and validation?\n\n- Tables 2 and 3: What is the motivation for separating the neural network and machine learning baselines? A unified comparison would improve clarity.\n\nMinor Comments\n\n- In the motivation, you mention that self-supervised learning is a major strength of deep learning approaches. How could TabKANet be adapted for use in self-supervised settings?\n\n- Section 3: The statement \"As mentioned in Sec. 2.1, GBDTs outperform NNs in table modeling tasks because of the skewed or heavy-tailed features in table information\" reads as a hypothesis rather than a definitive fact. 
Could this be clarified?\n\n- Line 267: *\"MLP is a traditional deep learning model consisting of multiple layers of neurons.\"* How are you defining a \"traditional\" deep learning model here? A more specific description might be helpful."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper addresses an important issue in structured tabular learning\n- The authors utilize various tabular datasets from different domains"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces TabKANet, a model that integrates Kolmogorov-Arnold Network (KAN) and Transformer architectures to improve the handling of structured tabular data. The authors demonstrate that TabKANet outperforms selected deep learning baselines in various tasks. However, its performance gains over decision tree-based ensemble methods, such as Gradient Boosted Decision Trees, are minimal."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The explanation of KAN and the model’s overall structure is brief and lacks clarity. The paper’s organization could be improved to aid readers in understanding the background and methodology.\n\n- The proposed method, TabKANet, offers only marginal improvements over existing models on key metrics for several datasets. For instance, on 2 out of 6 datasets, the AUC improvement compared to CatBoost is very small, with a difference of only **0.003**. This casts doubt on the claim from the abstract that *\"Its performance is comparable to or surpasses that of Gradient Boosted Decision Tree models (GBDTs).\"* Overall, such minor performance gains may not justify the added complexity of the approach.\n\n- The paper omits several recently proposed methods for deep tabular learning, such as TabPFN, GANDALF, DeepTLF, and more in [1]. Including these baselines would provide a more comprehensive comparison and better contextualize the performance of TabKANet. \n\n- The evaluation of TabKANet could be strengthened by using widely recognized benchmarks, such as Tabzilla, which is well-accepted by the community and would provide a more robust assessment of the model's effectiveness.\n\n\n[1] Borisov, Vadim, Tobias Leemann, Kathrin Seßler, Johannes Haug, Martin Pawelczyk, and Gjergji Kasneci. \"Deep neural networks and tabular data: A survey.\" IEEE transactions on neural networks and learning systems (2022)."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduce TabKANet, a novel model that leverages a KAN-based Numerical embedding module and Transformer to overcome neural networks' limitations in tabular data. It achieves performance comparable to or exceeding GBDT models on various datasets."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024tabkanet,\ntitle={Tab{KAN}et: Tabular Data Modeling with Kolmogorov-Arnold Network and Transformer},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3qDhqj6qfu},\nnote={under review}\n}"
},
"abstract": {
"value": "Tabular data is the most common type of data in real-life scenarios. In this study, we propose the TabKANet model for tabular data modeling, which targets the bottlenecks in learning from numerical content. We constructed a Kolmogorov-Arnold Network (KAN) based Numerical Embedding Module and unified numerical and categorical features encoding within a Transformer architecture. TabKANet has demonstrated stable and significantly superior performance compared to Neural Networks (NNs) across multiple public datasets in binary classification, multi-class classification, and regression tasks. Its performance is comparable to or surpasses that of Gradient Boosted Decision Tree models (GBDTs). Our code is publicly available on GitHub: https://github.com/AI-thpremed/TabKANet."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Tabular Data Modeling; Kolmogorov-Arnold Network; Numerical Feature Embedding"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/5f3beb5443afd972cd5e1ed1d6a376afad0fc0d8.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "TabKANet: Tabular Data Modeling with Kolmogorov-Arnold Network and Transformer"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3qeOy7HwUT | Input Space Mode Connectivity in Deep Neural Networks | main | Active | mode connectivity;input space;deep learning;adversarial detection;interpretability;percolation theory | interpretability and explainable AI | 5;5;6 | 3;4;3 | 3;3;3 | 3;2;2 | 2;2;2 | 5.333333 | 3.333333 | 3 | 2.333333 | 2 | -0.5 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Can we repeat the procedure of finding A->B'->C for each segment recursively to obtain A->E'->D'->B'->F'->G'->C (maybe a longer path), such that no essential barrier remains?\n2. How can we use input space mode connectivity to give a picture of the decision boundary of DNNs?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This topic is interesting. Investigating mode connectivity in the input space could help us characterize the decision boundary of DNNs.\n2. The insight that mode connectivity is an intrinsic property of high-dimensional geometry is important, as it might be able to explain various phenomena in the field of mode connectivity, such as why wide neural networks more easily satisfy mode connectivity after accounting for permutation invariance.\n3. The potential application towards adversarial examples is insightful, as it might explain the existence of adversarial examples."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors identify an interesting phenomenon, namely input mode connectivity, where samples with similar predictions can be approximately linearly interpolated such that the interpolated samples retain a low loss."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The major issue of this work is that the investigation is not in-depth enough. For example, in Fig. 3, the paths A->B'->C and A->B->C look similar, but they differ significantly in terms of mode connectivity. \n - How should we quantify such differences? And why is the small difference B'-B (as shown in the bottom right of Fig. 1) significant in terms of mode connectivity? \n - Here is another example: in the adversarial example part, why does the real-adversarial pair show a larger barrier than the real-real pair? An intuitive explanation is at least expected.\n\nThese are all important questions and represent the motivation for investigating mode connectivity in input space.\n\n2. Their theory cannot explain the phenomenon they discovered. Their conjecture is only able to explain mode connectivity for untrained NNs with infinitely large input dimension. However, in their experiments, two real images with similar predictions are usually not connected unless another intermediate point is found, say B'. Clearly, their theory cannot explain realistic scenarios."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Do you have any intuition as to whether results would fundamentally change if other architectures were considered? E.g. [1] find that mode connectivity in the parameter-sense is influenced by the choice of architecture, i.e. there is different behaviour for vision transformers or multi-layer perceptrons.\n2. I’m a bit confused regarding the adversarial example detector: are you comparing loss barriers after a single iteration of your algorithm in both cases, or after two iterations in the case of the adversarial setting? I thought that in both cases the barriers became very small? \n\n[1] Disentangling Linear Mode-Connectivity, Altintas et al., 2023"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The role of the input when it comes to loss behaviour is somewhat understudied, and the authors develop new ideas in this direction while keeping things very analogous to the results observed for parameter loss landscapes.\n2. The authors give further credibility to their results by mathematically proving them in an idealized setting assuming independence. While this is not realistic, I do find the argument of the authors convincing that correlations in this case will most likely help connectivity."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies input space mode connectivity, where instead of studying the loss surface as a function of the parameters, the authors consider varying the inputs (for a fixed set of parameters). More specifically, the authors investigate paths connecting two sets of inputs and the resulting behaviour of the loss, in complete duality to the well-known mode connectivity in parameter space. Several choices for “modes” in this context are explored: (1) Validation data points that achieve very low loss, (2) a validation data point and an adversarial example optimized towards the same class but based on a datapoint of a different class and (3) synthetic “optimal\" points optimized to maximise a given logit. For all these cases, the authors show how simple piecewise linear paths suffice to connect such points, while limiting the barrier to very small values. Adversarial examples exhibit larger barriers in a significant manner, allowing a detector to leverage this difference to classify whether an image is adversarial or not."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The biggest weakness is the lack of motivation presented in this paper for input space connectivity. Why is this an interesting quantity to study? In the case of the parameter loss landscape, where this notion originated, the motivation was from an optimisation point of view; is SGD attracted to a convex region of the parameter space? Does it find isolated minima or are there entire regions of low loss? There might be good motivations for input space connectivity as well (I’m not an expert in this area) but the paper does not do a good job at presenting it in its current form. In general I also have no intuition whether it is surprising that real-real images can be connected with two segments or not, etc. \n\n2. I like the idea of using the difference in barrier between real-real and real-adversarial inputs, but as usual in adversarial robustness, I think that new threat models need to be investigated when taking this idea into account. I.e. can one now develop adversarial examples that are designed to mimic the barrier of real examples, thus fooling the new classifier? I don’t expect the authors to necessarily develop such an algorithm but this possibility should at least be discussed in the paper.\n\n3. I also have a hard time interpreting the adversarial example results. It is not that surprising that it requires more segments to connect things properly compared to the real-real scenario (the image is still very different after all). How does this compare to simply taking two images from different classes and measuring the barrier between them? \n\n4. The writing of the paper is not very satisfying. The notation is rather sloppy (e.g. \"0.1*MSE for image deviation and 1e-7*high-frequency penalty\"), optimization details are listed without defining them (e.g. high-frequency penalty). The actual algorithm to obtain a piece-wise linear curve is never properly defined."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- I don't understand the algorithm presented in Section 4.2.1. In my understanding, the claim is that if one of the endpoints is from an adversarial attack, then the method of optimizing $B$ to find a connected (piecewise linear) path will not work. However, the algorithm for detecting adversarial attacks is to use the interpolation loss curve and logits as the input and do a classification. I don't see how this proposed algorithm is related to the finding.\n- The wording of Section 4.3 is so confusing that I cannot understand it. What does \"natural datasets for untrained models\" mean? What does \"starting from Gaussian noise\" mean (start from it and do what?) What does \"high-frequency penalization\" mean?\n- In Conjecture 5.1, why does the condition have \"for any probability $0 < p < 1$\", but $p$ is never used in the statement? What does \"almost always connected\" mean?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- This phenomenon is interesting and potentially valuable for understanding the behaviour of deep neural networks and the geometry of high-dimensional spaces.\n- This finding has practical usefulness in that it can be used to detect adversarial attacks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents an interesting phenomenon: given two input points $A,C$ which are classified as the same class by the model (i.e. have similar output), there exists a point $B'$ such that the linear interpolations between $A,B'$ and $B',C$ are all classified as the same class. The authors refer to this phenomenon as \"input space mode connectivity\", even though the points are not actually linearly connected. The authors also present an analysis under a very strong and unrealistic condition, conjecturing that this phenomenon is intrinsic to high-dimensional spaces."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The presentation is kind of confusing. See Questions for concrete issues.\n- In my understanding, the finding in this paper is not really linear connectivity, as the path found by the proposed method is actually a piecewise linear path with 2 pieces. This makes the title and introduction kind of misleading. \n- The experiments are only on a few image classification tasks and models. It is not clear if this phenomenon is general enough. \n- It seems that the theoretical explanation presented in Section 5.2 (even if we omit the overly strong assumption of randomized labeling of each grid) only explains why there can exist a connected path, but does not guarantee that the path is linear, which is inconsistent with Conjecture 5.1.\n- In Conjecture 5.1, why do you need to assume \"a subset of input space $X' \\subseteq X$\"? Does the conjecture hold even if $X'$ itself is unconnected? Moreover, there is a statement \"two random inputs $x_0, x_1 \\in X'$\". What is the distribution of the randomness? If it is uniform, then there must be extra constraints on $X'$ (such as compactness), since not every subset of a Euclidean space has a uniform distribution. (For example, you cannot draw two points uniformly from a 2d plane)."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We extend the concept of loss landscape mode connectivity to the input space of deep neural networks."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024input,\ntitle={Input Space Mode Connectivity in Deep Neural Networks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3qeOy7HwUT},\nnote={under review}\n}"
},
"abstract": {
"value": "We extend the concept of loss landscape mode connectivity to the input space of deep neural networks. Mode connectivity was originally studied within parameter space, where it describes the existence of low-loss paths between different solutions (loss minimizers) obtained through gradient descent. We present theoretical and empirical evidence of its presence in the input space of deep networks, thereby highlighting the broader nature of the phenomenon. We observe that different input images with similar predictions are generally connected, and for trained models, the path tends to be simple, with only a small deviation from being a linear path. Our methodology utilizes real, interpolated, and synthetic inputs created using the input optimization technique for feature visualization. We conjecture that input space mode connectivity in high-dimensional spaces is a geometric effect that takes place even in untrained models and can be explained through percolation theory. We exploit mode connectivity to obtain new insights about adversarial examples and demonstrate its potential for adversarial detection. Additionally, we discuss applications for the interpretability of deep networks."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"mode connectivity",
"input space",
"deep learning",
"adversarial detection",
"interpretability",
"percolation theory"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/b43c68644eb8bc147d6ba96272238c7c7a370d0d.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Input Space Mode Connectivity in Deep Neural Networks"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3rSeDrPj4B | AsymDreamer: Safe Reinforcement Learning From Pixels with Privileged World Models | main | Active | Safe Reinforcement Learing; World Model | reinforcement learning | 5;5;5;6 | 3;4;4;2 | 3;2;2;3 | 2;2;2;3 | 3;2;3;3 | 5.25 | 3.25 | 2.5 | 2.25 | 2.75 | -0.870388 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I am not familiar with this field at all and I cannot evaluate the novelty of this work (especially over its predecessor / related works). It is an interesting read and I didn't identify significant issues. The paper is well motivated with solid analysis and experiments. My major concern is that this paper only evaluates on toyish environments and may not generalize well to real-world scenarios."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper is well-motivated and novel. It successfully extends the Dreamer framework to handle asymmetric information in RL, presenting a novel ACPOMDP framework. It shows the effectiveness of using privileged information.\n- The results from Safety-Gymnasium benchmarks show that AsymDreamer outperforms baseline models in both task performance and safety metrics, especially in complex scenarios like 3D navigation. Thorough ablation studies are also conducted.\n- The paper includes rigorous theoretical analysis and compares ACPOMDP to standard CPOMDP."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents AsymDreamer, an approach based on the Dreamer framework that specializes in exploiting low-dimensional privileged information to build world models, thereby enhancing the prediction capability of critics. AsymDreamer employs the Lagrangian method to incorporate safety constraints. This paper formulates the proposed approach as an Asymmetric CPOMDPs (ACPOMDPs) framework. Experiments on the Safety-Gymnasium benchmark demonstrate that AsymDreamer outperforms existing approaches in both performance and safety."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I am not in this field so I am unable to evaluate the significance of the proposed method and the ACPOMDP problem/analysis.\n\nSome of my concerns include:\n- Over-reliance on privileged information: AsymDreamer’s performance relies on privileged information during training, which may not be available or may be hard to simulate in real-world environments. This potentially leads to bad performance or compromised safety in environments with limited or unavailable privileged information.\n\n- Extension to real-world environments: The experiments on the Safety-Gymnasium benchmark are a bit toyish (even the most challenging 3D navigation one, although this may also be the case for the related work). Additional testing in real-world environments would be beneficial to demonstrate the effectiveness."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Please clarify the similarities and differences between the proposed method and SafeDreamer[1].\n2. The reviewer is confused about what the global state in your privileged world model input is and where the privileged information (the low-dimensional vector from Appendix E) is used.\n\n[1] Huang, Weidong, et al. \"Safe dreamerv3: Safe reinforcement learning with world models.\" arXiv preprint arXiv:2307.07176 (2023)."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The theoretical analysis of ACPOMDPs in this paper is comprehensive. The authors propose the ACPOMDPs framework and provide a detailed analysis showing that asymmetric inputs reduce the number of critic updates and lead to a more optimal policy compared to the standard CPOMDPs framework.\n2. The authors introduce privileged information into the RSSM world model, enhancing the model's imagination capabilities and consequently improving the performance of the safe policy."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose AsymDreamer, a Dreamer-based safe reinforcement learning framework that utilizes low-dimensional privileged information to construct world models. The world model in AsymDreamer features two branches: the Privileged World Model, which takes a handcrafted low-dimensional vector as input, and the Observation World Model, which uses partially observed images (64x64 RGB). Additionally, the authors formulate their approach within the framework of Asymmetric CPOMDPs (ACPOMDPs) and integrate AsymDreamer with the Lagrangian method. Empirical results show that AsymDreamer outperforms existing safe reinforcement learning methods on the Safety-Gymnasium benchmark."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The methodology in this paper is too similar to prior work [1] and lacks sufficient novelty. In my view, the only difference is the inclusion of privileged information in the modeling of the RSSM world model.\n2. The authors are encouraged to include an ablation study to analyze the impact of adding privileged information to the observation input on the baseline algorithm.\n3. There are some typos in the paper that need to be corrected in the next version (e.g., line 400 and Figure 2).\n\n[1] Huang, Weidong, et al. \"Safe dreamerv3: Safe reinforcement learning with world models.\" arXiv preprint arXiv:2307.07176 (2023)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1.\tI think in Figure 1, the encoder, decoder, and hidden state are different for the two world models. So, it would be clearer to use different colors of notation for them to avoid confusion.\n2.\tWhy not use SafeDreamer as a baseline?\n3.\tIn Figure 4, why do all baselines have constant cost? Don’t they vary across training steps? What is the target cost limit?\n4.\tWhat does the red solid line mean in Figure 4 and Figure 5?\n5.\tThere is no explanation of the baselines. What does OSRP mean? It would be better to include simple descriptions of each baseline in the appendix.\n6.\tThe ablation study results in Figure 5 raise a lot of questions. I think the authors may also be surprised that the privileged world model fails to train a viable cost predictor when taking privileged information as input. In addition, DreamerV3 is much better than AsymDreamer(S) in terms of the reward. This is also hard to interpret. I think the authors should do more experiments to explain these results and provide insights into the model. Otherwise, there seems to be no clear message that can be summarized from this paper."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1.\tThe idea of using privileged information in the Dreamer structure is interesting. Separating observation modeling and task-centric prediction modeling avoids the potential trade-off between these two tasks. This also allows the observation world model to capture more detailed observation information, thus enabling the actor model to achieve better performance with richer input features.\n2.\tThe paper is generally well-written and well-organized. Figure 1 clearly shows the training pipeline."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper focuses on the safe RL problem that struggles with performance degradation and often fails to satisfy safety constraints. They attribute this problem to the lack of necessary information in partial observations and inadequate sample efficiency. Specifically, they exploit low-dimensional privileged information to build world models, thereby enhancing the prediction capability of critics. The authors propose Asymmetric Constrained Partially Observable Markov Decision Processes, a relaxed variant of CPOMDPs. The key distinction is that ACPOMDPs assume the availability of the underlying states when computing the long-term expected values. To ensure safety, they employ the Lagrangian method to incorporate safety constraints. The experiments conducted on the SafetyGymnasium benchmark demonstrate that the proposed approach outperforms existing approaches dramatically in terms of performance and safety."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\t**Justification of using privileged information**. It seems that the motivation for using privileged information is this sentence: “Since training is often conducted in simulators, there is potential to leverage privileged information during training to reduce uncertainty from partial observations”. Does the proposed method have potential applications beyond simulations, such as in real-world scenarios? In addition, I am not sure if it is fair to compare with other methods that do not use privileged information.\n2.\t**Evaluation results**. The experimental evaluation is only conducted on 4 tasks, including one self-made task. The authors may need to include all the remaining tasks in the Safety Gymnasium.\n3.\t**Missing baselines on the same benchmark**. Although this paper proposes a model-based method, I think it is still meaningful to compare with some well-known methods on the same benchmark. This website contains some results that can be used as reference: https://fsrl.readthedocs.io/en/latest/tutorials/benchmark.html.\n4.\tThere is no clear evidence to show the benefit of using privileged information. The results in Figure 5 need more investigation and explanation. Otherwise, it is hard to summarize the main conclusion of the proposed method."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "It would be great if the authors could address the weaknesses I outlined above."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* The proposed method of combining DreamerV3 with two world models and Lagrangian methods, which is novel and well-motivated.\n* The paper presents empirical results that compare AsymDreamer with many relevant model-based and model-free baselines in several tasks, achieving competitive performance on the Safety-Gymnasium benchmark.\n* The paper is mostly well-written."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this work, the authors address the challenge of exploiting asymmetric inputs under the CPOMDPs framework. They propose AsymDreamer, a new algorithm that uses privileged information to build a world model for the critic. They also introduce the ACPOMDPs framework, an extension of CPOMDPs allowing asymmetric inputs for the actor and critic. Theoretically, asymmetric inputs reduce critic updates and lead to a better policy. AsymDreamer constructs two world models, one for the actor based on historical information and another for the critic with privileged information. It is integrated with the Lagrangian method and shows competitive performance on the Safety-Gymnasium benchmark and strong adaptability to complex scenarios."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The work is built on top of SafeDreamer, incrementally adding another world model for the critic, and the differences in terms of the loss optimized are not sufficiently highlighted.\n* The proposed DreamerV3-based method is basically a combination of existing components like CPOMDPs, the Augmented Lagrangian, and RSSM. While this is not a problem in itself as they are common in model-based reinforcement learning, there is very little discussion on why these particular methods were chosen, what possible alternatives from the literature exist, and whether they might yield better results.\n* Lack of introduction to the baseline algorithms. Also, in the PointGoal2 scenario, the performance results seem to differ from those reported in the SafeDreamer paper.\n* Stability verification needs to be considered. It is unclear how the hyperparameters in Equations (8) and (9) are selected, and certain ablation experiments need to be conducted.\n* Whether modeling two world models leads to a significant increase in learning time, as well as the effectiveness of model learning, also needs to be considered.\n* Privileged information and partial observations should be considered in more environments. A single scenario in QuadrotorGoal1 cannot be trusted to determine whether privileged information enhances performance."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024asymdreamer,\ntitle={AsymDreamer: Safe Reinforcement Learning From Pixels with Privileged World Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3rSeDrPj4B},\nnote={under review}\n}"
},
"abstract": {
"value": "Safe Reinforcement Learning from partial observations frequently struggles with rapid performance degradation and often fails to satisfy safety constraints. Upon deeper analysis, we attribute this problem to the lack of necessary information in partial observations and inadequate sample efficiency. World Models can help mitigate this issue, as they offer high sample efficiency and the capacity to memorize historical information. In this work, we introduce AsymDreamer, an approach based on the Dreamer framework that specializes in exploiting low-dimensional privileged information to build world models, thereby enhancing the prediction capability of critics. To ensure safety, we employ the Lagrangian method to incorporate safety constraints. Additionally, we formulate our approach as an Asymmetric CPOMDPs (ACPOMDPs) framework and analyze its superiority compared to the standard CPOMDP framework. Various experiments conducted on the Safety-Gymnasium benchmark demonstrate that our approach outperforms existing approaches dramatically in terms of performance and safety."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Safe Reinforcement Learing; World Model"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/996199ad2305ae0b92f476f04af6cb25003b37f0.pdf"
},
"presentation": null,
"primary_area": {
"value": "reinforcement learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "AsymDreamer: Safe Reinforcement Learning From Pixels with Privileged World Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3rUAS7HCKE | Dark Miner: Defend against unsafe generation for text-to-image diffusion models | main | Active | Text-to-Image Diffusion Models;Unsafe Generation;Concept Erasure | alignment, fairness, safety, privacy, and societal considerations | 3;5;5;5 | 4;4;4;4 | 1;2;3;3 | 2;2;3;2 | 2;3;2;3 | 4.5 | 4 | 2.25 | 2.25 | 2.5 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "The first two steps, i.e., mining and verifying, can be directly applied to mine and search for text embeddings that generate malicious content."
},
"flag_for_ethics_review": {
"value": [
"Yes, Privacy, security and safety"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please see the weakness part."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "1. This paper studies a critical area: the capability of text-to-image models enables their unauthorized use to generate harmful or disturbing content.\n2. Experiments exhibit good overall performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces Dark Miner for mitigating unsafe content generation in text-to-image diffusion models. Dark Miner involves a recurring three-stage process of mining, verifying, and circumventing. It's designed to iteratively identify and suppress unsafe content generation. Comprehensive experiments demonstrate the superior performance of Dark Miner."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The performance of Dark Miner is largely limited by the image pool related to c. That is to say, a lot of text that leads to concept c, but is not linked to the images in the pool, will not be identified by the mining step.\n2. Questionable performance by CLIP in Section 3.2.2. Can you discuss how well CLIP performs this task? As this is critical to your method, I believe it is an important part of the ablation study.\n3. Also, CLIP is used both in the model to be erased (i.e. SD) and in Section 3.2.2 to identify the concept. However, texts that circumvent the defense will lead to images that cannot be identified by CLIP.\n\n4. The method is only evaluated on two SD models. Will it generalize well beyond the SD family?\n5. Lack of ablation on the three steps of Dark Miner. For instance,\n * How well does the method perform with and without the verifying step?\n * Ablation over the size of the image pool\n * Ablation over different parameters and configurations for optimizing the embeddings."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "What classes and threshold are used in the NudeNet classifier?\nThe original SLD and ESD papers test their FID scores on COCO-30k, which is drawn from the COCO 2014 validation dataset; why didn’t the authors use the same one?\nAccording to another relevant paper on the task of nudity elimination: Li, Xinfeng, et al. \"SafeGen: Mitigating Unsafe Content Generation in Text-to-Image Models.\" arXiv preprint arXiv:2404.06666 (2024). In Table 3, they also give FID scores on the COCO 2017 validation dataset, achieving 20.36 and 20.31 for ESD and their proposed method SafeGen, which are better than the authors’ implementation. We suggest the authors revisit their FID calculation for ESD on COCO 2017."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Authors point out the limitations of existing methods and provide theoretical analysis.\nAuthors propose an iterative scheme to mine the text c with the maximum likelihood of generating unsafe content, whereas previous methods usually predefine such text c.\nAuthors propose a method to avoid overly aggressive concept removal by verifying the embedding before circumventing. Authors apply the CLIP model to extract delta features from the reference image and the generated image, then define a new metric based on the cosine similarity of the delta features.\nAuthors make an effort to test the proposed method against 4 different attacks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes Dark Miner to eliminate unsafe content generation in T2I models. It searches for and verifies the embeddings that contain the unsafe concept and reduces unsafe generation by adjusting a LoRA adapter of the T2I model.\nThe paper mainly makes the following contributions:\n1.\tPoints out that previous methods fail to avoid unsafe generation on out-of-distribution prompts and are easily tricked by attack methods.\n2.\tProposes Dark Miner, which uses a three-stage process to mine the optimal embeddings related to the unsafe concept, reduce unsafe generation, and maintain generation ability on benign prompts.\n3.\tEvaluates the effectiveness of the proposed method compared with 6 SOTA methods and under 4 SOTA attacks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "In Section 3.2.2, the authors use the prompt “a photo” to generate a reference image and compare its distance to the target image generated by the mined embedding. However, due to the randomness of the reference image, the difference between the reference image and the target image is always high. In an extreme case, when there are benign images or non-explicit sexually related images in the image pool, the verifying and circumventing steps will be affected.\n\nAlso, the prompt of the target image has no relation to the prompt of the reference image. Another potential drawback is that all the concepts from the “unsafe” prompt are shifted away in a random direction without guidance. Therefore, the image generated by an “unsafe” prompt will not maintain any semantic information from its prompt even when there is safe content expressed by the “unsafe” prompt. The generation will be random, which might degrade the utility of the model.\n\nIn the evaluation metrics, the authors use the mean classification score from NudeNet to evaluate inappropriateness; however, they didn’t mention what classes and threshold were used in their implementation. It would also be better if the authors could show the number of images classified as inappropriate instead of the classification score.\n\nFor the CLIP score, the proposed method has relatively low performance compared with other SOTA methods."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. The paper claims that existing methods fail to guarantee safe generation for unseen texts in the training phase. Can the proposed Dark Miner provide such a guarantee? If so, please provide more details or discussion regarding this.\n\n2. In the mining stage of DM2, can DM2 discover novel harmful prompts? E.g., this work shows that text-to-image models also suffer from multimodal pragmatic jailbreak prompts [1]. Can such jailbreak prompts also be learned by the proposed DM2? Some discussion about the limitations of the mining stage would be helpful to the readers.\n\n[1] Liu, Tong, et al. \"Multimodal Pragmatic Jailbreak on Text-to-image Models.\" arXiv preprint arXiv:2409.19149 (2024).\n\n3. How does the proposed approach perform if the system is black-box? Is it still feasible?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The research topic in this paper is relevant to the community.\n2. The organization of the paper is relatively clear, even if not perfect.\n3. Experimental details are clearly stated."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work proposes, Dark Miner, an approach designed to address unsafe content generation in text-to-image diffusion models. Unlike existing methods that mainly adjust generation probabilities for known unsafe textual inputs, Dark Miner emphasizes minimizing unsafe generation probabilities for unseen or adversarial prompts. This is achieved through a recurring three-stage process: mining embeddings with high probabilities for unsafe content, verifying them, and circumventing unsafe generation pathways. Some experimental results demonstrate that Dark Miner outperforms six state-of-the-art methods in erasing and defending against unsafe content."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The author claims the following as one of the contributions:\n“Based on the total probabilities, we analyze the reason why existing methods cannot completely erase concepts for text-to-image diffusion models and are vulnerable to attacks.”\nI did not find a (sub)section for this part, although some discussion can be found in some paragraphs.\nThe reason cannot be found in the conclusion section either.\n\n2. Missing related work: The proposed Dark Miner aims to minimize unsafe generation for unseen or adversarial prompts. One previous work [1] also handles unseen or adversarial harmful prompts by building a robust detection space, but it is missing from the related work.\n\n[1] Liu, Runtao, et al. \"Latent guard: a safety framework for text-to-image generation.\" ECCV, 2024.\n\n3. Experiments: Previous works show their effectiveness on many harmful concepts. This work only conducts experiments on two. The generalisation of the proposed approach remains to be further verified. Previous approaches also show very different performance on different harmful concepts.\n\n4. Can the proposed approach handle prompts that generate biased images, e.g., with gender bias? Bias is also a harmful concept in responsible text-to-image generation [2,3].\n\n[2] Li, Hang, et al. \"Self-discovering interpretable diffusion latent directions for responsible text-to-image generation.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\n[3] Friedrich, Felix, et al. \"Fair diffusion: Instructing text-to-image generation models on fairness.\" arXiv preprint arXiv:2302.10893 (2023).\n\nMinor issues:\n- The citation format is incorrect across the paper.\n- Confusing annotation, e.g. 0c in Equation 8"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please help to address weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper presents a clear logical flow and is well-written.\n- Experiments demonstrate the effectiveness of the proposed method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper focuses on the model editing task for text-to-image diffusion models. To address the challenges in erasing all unsafe prompts, this paper proposes a method called Dark Miner. This method consists of three steps: 1) mining the potential embeddings related to the unsafe images, 2) assessing whether the potential embeddings effectively induce the model to generate unsafe images, and 3) if the mined embedding is effective, conducting the erasing process. In step 3, to protect the generation of safe concepts, this paper also incorporates regularization over three kinds of concepts: a predefined anchor concept, a null concept, and a concept '-c' that is defined as 'unsafe embedding * -1'."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Please clarify the differences from the existing studies below [1,2].\n\n2. In Table 1, the attack success rate of UnlearnDiff on the Violence concept is still 79%. Please provide an explanation for this. A similar phenomenon appears in P4D with the Violence concept (ASR is 46%) and the Church concept (ASR is 49%).\n\n3. In Eq. 8, this paper proposes three regularization terms to protect the generation of safe concepts but lacks related ablation experiments to assess the effect of these three terms.\n\n[1] RACE: Robust Adversarial Concept Erasure for Secure Text-to-Image Diffusion Model, ECCV 2024\n\n[2] Reliable and Efficient Concept Erasure of Text-to-Image Diffusion Models, ECCV 2024"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "A new method to erase unsafe concepts and defend against attacks for text-to-image diffusion models."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024dark,\ntitle={Dark Miner: Defend against unsafe generation for text-to-image diffusion models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3rUAS7HCKE},\nnote={under review}\n}"
},
"abstract": {
"value": "Text-to-image diffusion models have been demonstrated with unsafe generation due to unfiltered large-scale training data, such as violent, sexual, and shocking images, necessitating the erasure of unsafe concepts. Most existing methods focus on modifying the generation probabilities conditioned on the texts containing unsafe descriptions. However, they fail to guarantee safe generation for unseen texts in the training phase, especially for the prompts from adversarial attacks. In this paper, we re-analyze the erasure task and point out that existing methods cannot guarantee the minimization of the total probabilities of unsafe generation. To tackle this problem, we propose Dark Miner. It entails a recurring three-stage process that comprises mining, verifying, and circumventing. It greedily mines embeddings with maximum generation probabilities of unsafe concepts and reduces unsafe generation more effectively. In the experiments, we evaluate its performance on two inappropriate concepts, two objects, and two styles. Compared with 6 previous state-of-the-art methods, our method achieves better erasure and defense results in most cases, especially under 4 state-of-the-art attacks, while preserving the model's native generation capability. Our code can be found in Supplementary Material and will be available on GitHub."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Text-to-Image Diffusion Models",
"Unsafe Generation",
"Concept Erasure"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/3d1af118dcf2b14ce995c3402e7362f459be3da6.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/f58504f2c8c28270f0ea6f93e59c988381fb1d6a.zip"
},
"title": {
"value": "Dark Miner: Defend against unsafe generation for text-to-image diffusion models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3rnraGvyNr | DiffStroke: High-Quality Mask-free Image Manipulation with Partial Sketches | main | Active | Image manipulation;sketch-based image editing;mask-free;diffusion model | generative models | 5;5;5;5 | 4;4;4;4 | 3;3;2;3 | 2;2;3;2 | 3;3;3;3 | 5 | 4 | 2.75 | 2.25 | 3 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Related to the weakness (2), would swapping I_src and I_tar, i.e. using the deformed image and sketch as input with the original image as ground truth, give better results? \n2. How are the hyperparameters 2.5, 0.25, and 273 in Equation 10 decided? How do the choices on these values affect the results?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The method is based on diffusion models, which are shown to produce higher-quality and more diverse images than GANs. Therefore, it is unsurprising that the proposed method outperforms the previous approaches based on GANs. \n2. The method can edit an image based on partial input sketches without masks. It is more user-friendly than ControlNet and inpainting. \n3. The model is trained using hand-drawn sketches, reducing the gap between training and test-time conditions."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This manuscript presents a method, called DiffStroke, for image editing based on partial sketch inputs using diffusion models. The method consists of two components, the trainable plug-and-play Image-Stroke Fusion (ISF) module and a mask estimator. The ISF modules fuse the sketch input encodings with the source image features. The mask estimator estimates a mask based on the input sketch to prevent alternation in irrelevant areas. Experimental results demonstrate that the proposed method outperforms previous methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The main weakness is the limited novelty. There are multiple existing mask-free sketch-based image editing methods. The main difference between the proposed method and the existing methods is that the proposed method uses the diffusion model while the existing methods are based on GANs. This contribution is not significant enough for a top conference like ICLR. \n2. The proposed method uses free-form deformation to generate training data, limiting the editing capability to simple deformations and resulting in visually unpleasant images. Many result images shown in the paper are not aesthetically pleasing, for instance, the face in Figure 6 and the mug in Figure 2 do not look natural. \n3. The proposed method is not useful in practice. The technique employs a heavy text-based diffusion model while only being able to produce simple deformations. Users can get a similar effect with PhotoShop in a few simple steps"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "**Experimental results**\n\n1.\tIn Fig. 4 and Fig. 5, the predicted masks are suggested to be given.\n2.\tI wonder how the prompts affect the output? Given the same image and sketch, using different prompts will generate the same image or diverse image. Will wrong or imprecise prompts fail the method?\n3.\tHow to calculate the mask without ISF in the ablation study of Fig. 6?\n4.\tAlthough the authors discuss the reason why SketchEdit shows poor results on human faces, it is still valuable to conduct comparisons with SketchEdit on the official images provided by SketchEdit. \n5.\tThe proposed method is claimed to be `plug-and-play`. However, there is no validation about this. For example, what is the performance when applied to controlnet?\n\nSome unclear details or minor issues:\n\n1.\tLine 258, why use $S_{tar}-S_{src}$? This will lead to some -255 values in the images? Should it be $max(0, S_{tar}-S_{src})$?\n2.\tLine 272, $h_i^{src}$ and $h_i^s$ should be $\\mathbf{h_i}^{src}$ and $\\mathbf{h_i}^{s}$\n3.\tLine 286 and Line 295, MLP is misleading, since the implementation of $f_{MLP}(.)$ is a one-layer CNN. It should be $f_{CNN}(.)$\n4.\tLine 318 and Line 373, `Eq. 3` should be `Eq. (3)`\n5.\tLine 327, it is unclear why t=273 is used? In what part of the algorithm it is used? Only for extracting the features from $z_{src}$? Why not using the same noise level as the main network for feature fusion?\n6.\tLine 335, `mask mask` should be `mask`\n7.\tLine 466, `the metrics of the metrics`"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "**Originality** This methods propose a ISF block, to fuse the features of original images and stoke images to predict the editing mask and guide the diffusion-based conditional image generation process. With this design, the users are free from drawing masks, which make this method user-friendly. I think this task is interesting and valuable.\n\n**Performance** Generally, I found this method perform good. Even without masks, this method outperforms many other mask-based methods, and show more flexibilities through more precise mask prediction."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a DiffStroke framework for mask-free image editing with sketches. It designs an image-stroke fusion module to fuse the features of original images and stoke images to predict the editing mask and guide the diffusion-based conditional image generation process. This method is plug-and-play and shows good performance over other mask-based or mask-free image editing methods. The main contribution is that the proposed method is make-free, which can save many users’ efforts during editing."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**Experimental results** Although most of the results look reasonable, some cases have limitations. For example, in the Fig. 4, in the last case (church), the unedited part in the left of the sketches is incorrectly edited into a tree. It seems that the predicted mask is not precise enough and would harm the content of the original image in the unedited region. \n\nThen, another limitation of this paper, as also discussed in the paper, is that this paper is designed to only edit the shape of the objects under a small distortion. To me, the proposed methods handle some image morphing or image warping operations, lacking the generative abilities of the original diffusion models. It cannot handle large region changes such as replacing some objects or creating some new objects. I believe this greatly limits the application scenarios of the method.\n\nIn addition, the proposed method is claimed to be `plug-and-play` (Line 120). However, I didn’t find validations about this."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "This technology can be used for creating deepfakes."
},
"flag_for_ethics_review": {
"value": [
"Yes, Privacy, security and safety"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please find the questions in the Weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The motivation is clear, the experiments are thorough, and the presentation is clear and easy to understand.\n- Both qualitative and quantitative comparisons with state-of-the-art methods demonstrate the effectiveness of the proposed methods.\n- There are ablation studies that confirm the efficacy of the proposed module."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents DiffStroke, a mask-free method for high-quality image editing using only partial sketches. The proposed approach includes a trainable plug-and-play image-stroke fusion (ISF) module and a mask estimation module aimed at addressing the limitations of previous techniques. Experimental results demonstrate that DiffStroke outperforms existing methods in both simple and complex stroke-based editing tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Experiments from the ablation studies show that providing masks from estimation is important. Creating a mask is not as difficult as drawing sketches; for example, it can be easily obtained from users by scribbling. Masks can also be derived from sketches by identifying the outline boundaries. What would be the differences between these approaches? It would be beneficial to compare the proposed method by modifying it to accept both masks and sketches as inputs, using the outline boundaries directly obtained from the given sketch as the mask.\n\n- The method is built upon a sketch-based T2I adapter with modifications to the feature infusion. Compared to the T2I adapter, the form of information fusion does not seem fundamentally different; it simply passes the features through transformer layers to fuse them and uses an additional MLP for mask estimation. Could you elaborate on the differences and possibly conduct ablation studies on the structure of the ISF blocks? It would be helpful to see a comparison with minimal changes made by adapting the idea based on the current T2I adapter structure.\n\n- The paper did not provide diverse generation results or failure cases. It would be helpful to show diverse results with the same sketch and prompt, the same sketch with different prompts, or the same prompt with different sketches. Additionally, what would happen without prompts? In the provided results in Figure 6, the prompt appears quite lengthy, making the editing process less user-friendly.\n\n- Some edited results appear less natural than those from the T2I adapter or ControlNet. It would be beneficial to conduct user studies comparing the results from different methods in terms of consistency and naturalness."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see weaknesses above."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "(+) The paper is well-written and easy to follow.\n\n(+) The results are visually pleasing."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a novel plug-and-play module, namely the Image-Stroke Fusion (ISF) module, for mask-free image editing using strokes. The ISF module contains a simple concatenation of the source image features extracted by the T2I-adapter and the sketch information, and its shallowest layer is used for mask prediction. Experimental results are visually pleasing."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "(-) My major concern is that the technical contributions are thin and the methodology is a bit too straightforward. The proposed method relies heavily on T2I-adapter and the proposed Image-Stroke Fusion (ISF) module does not demonstrate many architectual contributions as it contains a simple concatenation of the source image features extracted by the T2I-adapter and the sketch information, and the mask prediction branch is also straightforward. There are not many novel insights here as well (Why the ISF can improve quality? What is the merit of the proposed mask estimation strategy compared to existing ones?). The editing through masking is also a common technique in diffusion-based models.\n\n(-) The data preparation part follows a similar strategy in previous works and also limits the scope of sketch editing to shape changes.\n\n(-) The paper claims that the module is plug-and-play but there are only experiments with T2I-adapter, experiments on several other models are required to justify this claim."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "A conditional control diffusion model for high-quaility, mask-free image manipulation with partial sketches."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024diffstroke,\ntitle={DiffStroke: High-Quality Mask-free Image Manipulation with Partial Sketches},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3rnraGvyNr},\nnote={under review}\n}"
},
"abstract": {
"value": "Sketches offer a simple yet powerful way to represent object configurations, making them ideal for local image structure manipulation. Traditional methods often treat sketch-based editing as an image inpainting task, requiring both user-provided strokes and masks, which hinders the user experience. Although recent mask-free stroke-based editing methods are more convenient, they often produce significant artifacts or unintentionally modify irrelevant regions. To overcome these challenges, we propose DiffStroke, a mask-free method for high-quality image editing using only partial sketches. Trainable plug-and-play Image-Stroke Fusion (ISF) modules and an effective mask estimator are developed to address the limitations of previous conditional control diffusion models in preserving style consistency and protecting irrelevant areas. The ISF modules fuse stroke encodings with source image features as input conditions, enabling DiffStroke to control local shapes while preserving overall style consistency. The mask estimator automatically predicts masks to preserve irrelevant regions without the need for manual input. Specifically, DiffStroke blends the estimated clean latent image with the encoded source image using the predicted mask, with the mask estimator trained to minimize the error between the blended result and the latent target image. Experimental results on natural and facial images demonstrate that DiffStroke outperforms previous methods in both simple and complex stroke-based image editing tasks."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Image manipulation",
"sketch-based image editing",
"mask-free",
"diffusion model"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/61cb065b01665d54724bf27af028b535f91732d8.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/e6512213a70e67b61888227d52734bb7f8f4cbad.zip"
},
"title": {
"value": "DiffStroke: High-Quality Mask-free Image Manipulation with Partial Sketches"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3sf7SpOYIe | ACUS: Audio Captioning with Unbiased Sliced Wasserstein Kernel | main | Active | audio captioning;exposure bias;multimodal learning | applications to computer vision, audio, language, and other modalities | 3;3;3;5;8 | 4;3;4;4;4 | 2;2;2;2;4 | 2;2;2;2;3 | 2;1;2;2;4 | 4.4 | 3.8 | 2.4 | 2.2 | 2.2 | 0.357217 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "How does the computational complexity scale with audio length and batch size compared to baseline methods?\nHow robust is the method to different audio qualities or noise levels? Was this tested?\nWhat is the impact of different positional embedding choices on the final performance? While rotary embeddings performed best, is there a theoretical justification for this?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "The paper introduces an unbiased sliced Wasserstein RBF (USW-RBF) kernel that effectively handles temporal information across modalities while avoiding dimensionality curse issues that affect traditional Wasserstein distances.\n\nStrong Theoretical Foundation: Provides formal proofs for the kernel's properties (positive definiteness, unbiasedness).\nDemonstrates convergence rate for Monte Carlo estimation.\n\nComprehensive Evaluation: Tests on multiple datasets (AudioCaps and Clotho). Uses both automatic metrics and human evaluation\nIncludes detailed ablation studies for various components.\n\nAchieves state-of-the-art performance on multiple metrics"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces ACUS (Audio Captioning with Unbiased Sliced Wasserstein kernel), a novel framework that addresses exposure bias and temporal misalignment issues in audio captioning systems. The key technical contribution is the development of an unbiased sliced Wasserstein RBF (USW-RBF) kernel equipped with rotary positional embeddings, which effectively measures similarity between acoustic and linguistic features while preserving temporal information. Experimental results on AudioCaps and Clotho datasets demonstrate significant improvements over state-of-the-art methods across multiple metrics."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "No analysis of computational overhead from the USW-RBF kernel\n\nUnclear how the method performs on longer audio sequences\n\nWhile ablation studies are included, there's limited discussion of how sensitive the method is to various hyperparameters\nCould benefit from more guidance on hyperparameter selection for new datasets.\n\nLacks detailed analysis of failure cases"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. There are some minor errors, like in line 187, it should be \\( \\nu = \\frac{1}{N} \\sum_{j=0}^N \\delta_{z_y^j} \\), not \\( \\nu = \\frac{1}{M} \\sum_{j=0}^N \\delta_{z_y^j} \\), to make two empirical distributions have the same number of supports. I did not thoroughly inspect every math part in this paper, but I think authors could check the whole paper again thoroughly. Also, in conclusion, it should be \"unbiased kernel\", not \"unbias kernel\". (No worries, they are just minor error, but it is better to correct for clarity.)\n\n2. As I mentioned in weakness part, the reported improvements over baselines appear modest. Could you provide more analysis on how the proposed method performs in more challenging scenarios (e.g., multi-speaker or noisy environments) to better highlight its strengths? If not, do you believe that temporal information is important in audio captioning task? \n\n3. The application area of this work seems limited. Do you have any plans to extend this work for multilingual audio captioning or automatic speech recognition? If so, how might the kernel method adapt to language diversity in audio processing? how you modify the text embedding under multilingual scenario? \n\n4. Since you use stochastic decoding strategies in inference stage, which may lead to high computational costs, and the reported score in your results is not very decent, we might not need such a high-cost method to get a minor improvement. Thus, could you provide more details on the differences in diversity and quality of captions generated by your approach?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The introduction of the unbiased sliced Wasserstein RBF (USW-RBF) kernel, which captures temporal information across audio and text modalities, is an advancement. By accounting for temporal alignment, it addresses limitations in prior contrastive methods that often ignore the temporal structure of audio data.\n\t\n2. ACUS effectively addresses exposure bias—a common issue in captioning tasks—by combining the USW-RBF kernel with stochastic decoding methods. This approach ensures that generated captions maintain diversity and relevance across varying contexts.\n\t\n3. The ACUS framework enhances not only the length and diversity of captions but also their semantic alignment with audio events. By capturing temporal details, it generates more descriptive and meaningful captions, which are validated by both quantitative metrics and qualitative assessments.\n\n4. The paper thoroughly derives and proves the properties of the USW-RBF kernel, reinforcing its validity for multimodal tasks. Additionally, by introducing a practical approach to reducing exposure bias, it offers a methodological contribution that may extend beyond audio captioning to other sequence generation tasks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a novel approach for audio captioning to address issues related to exposure bias and temporal misalignment. Here’s a summary of the key contributions:\n1. Unbiased Sliced Wasserstein RBF Kernel (USW-RBF): The authors propose a novel kernel method to accurately measure cross-modal similarity between audio and textual data. This kernel, equipped with rotary positional embedding, captures temporal information more effectively than traditional methods, addressing limitations like dimensionality and temporal distortion.\n2. Mitigating Exposure Bias: ACUS employs stochastic decoding techniques, such as nucleus and top-k sampling, to reduce exposure bias during the inference stage, enhancing caption diversity and quality. This is achieved by leveraging the USW-RBF kernel to improve alignment between generated captions and audio inputs.\n3. Extensive Evaluation and Compatibility: The framework’s efficacy is validated through experiments on two datasets, AudioCaps and Clotho. Results demonstrate improved caption length, lexical diversity, and self-retrieval accuracy, and compatibility with diverse encoder-decoder architectures.\n\nIn essence, ACUS represents an advancement in audio captioning by integrating the unbiased USW-RBF kernel with stochastic decoding, leading to more descriptive and temporally coherent audio captions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. While the paper introduces a promising method, the improvements observed in the current experiments are relatively modest. To more rigorously validate the effectiveness of your approach, I recommend evaluating your model on additional, more challenging benchmarks, which may make this work more convincing if your method reach higher score in these benchmarks, such as:\n\na. SUPERB: This benchmark includes a broad range of speech processing tasks, covering content, speaker, semantics, and paralinguistics. It would provide a comprehensive baseline, helping clarify how well your model generalizes across core speech tasks.\n\nb. Dynamic-SUPERB: This benchmark extends SUPERB with instruction-tuning and zero-shot tasks, pushing models to handle more complex and varied speech processing scenarios. Testing on Dynamic-SUPERB could demonstrate your method’s robustness and adaptability in handling multi-task and instruction-following requirements, offering deeper insights into its generalization capabilities.\n\nc. SpeechCaps: Given the emphasis in your work on speaker-specific and temporal information, SpeechCaps offers a relevant test for multi-talker and speaking style captioning. Its focus on speaker and prosodic information could highlight the strengths of your model in more intricate, real-world audio scenarios, such as multi-speaker dialogues and expressive speech.\n\n2. Authors provide a detailed explanation of the USW-RBF kernel. However, it lacks sufficient details on how this kernel is integrated within the overall model architecture. You can try these to make it better, such as:\n\na. Integration Details: Please provide a clearer, step-by-step description of how the USW-RBF kernel is incorporated into the model pipeline. \n\nb. Diagram or Flowchart: Consider adding a diagram or flowchart that visualizes the integration process, illustrating where and how the USW-RBF kernel interacts with audio and textual embeddings within the architecture."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- How does the performance of USW-RBF compare with other non-Wasserstein-based kernels for audio captioning tasks?\n- What are the potential trade-offs between the accuracy improvements and the computational costs introduced by Monte Carlo sampling and stochastic gradient optimization in ACUS?\n- Why was the rotary positional embedding favored over other encoding techniques, and could alternative embeddings further enhance the results?\n- Why audio captioning? Can the method be useful to other tasks? Audio understanding? Speech Recognition?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper is decently written and easy to follow for an expert. However, I would like to mention it might be difficult for a traditional audio captioning community to read. A bit more background on the 1, the unbiased sliced Wasserstein RBF kernel would have been appreciated. The equations could have been improved by defining the notations better. Fro examples, if I want to read Eqtn 9., I need to find what the notations mean somewhere else in the paper.\n- The problem handled in the paper is new. The approach is also novel. ACUS combines the USW-RBF kernel with stochastic decoding methods like nucleus and top-k sampling to alleviate exposure bias during inference. A lot of work in audio captioning ideally propose new architectures. This paper brings a fresh perspective in the problem space.\n- The evaluation is sound. The 2 usual benchmark datasets are used and it is also combined with human evaluation. The metrics of descriptiveness, correctness, and fluency are good metrics for comparison as ideal benchmark metrics seem to have saturated and require style memorization."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a new framework for audio captioning designed to mitigate exposure bias and improve temporal cross-modal similarity measurement. The authors claim that traditional audio captioning models trained via maximum likelihood estimation face exposure bias, leading to \"degeneration\" in generated captions. This paper introduces the unbiased sliced Wasserstein (USW-RBF) kernel equipped with rotary positional embedding to capture temporal information across acoustic and linguistic modalities, thereby reducing exposure bias. The authors show improvements on benchmark datasets (AudioCaps and Clotho)"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The abstract says\" \"Prior works propose the contrastive method to deal with caption degeneration. However, the contrastive method ignores the temporal information when measuring similarity across acoustic and linguistic modalities, leading to inferior performance.\" -- which contrastive method and how does it ignore the \"the temporal information when measuring similarity across acoustic and linguistic modalities\"? This first line of the paper is very difficult to understand.\n- The Monte Carlo sampling and stochastic gradient optimization may increase computational costs, potentially impacting efficiency in real-world large-scale applications.\n- While I understand that the authors focus on Enc-Dec framework, a good number of baselines were missed for evaluation. ACUS can act as complimentary to most other methods proposed in literature as all methods require an audio encoder and a language decoder (including prefix based architectures). Thus, some baselines were missed. See [1,2] as examples and papers compared in [1,2].\n- The analysis section is just ablations. A deeper analysis section (see questions below) would have strengthened the paper.\n\n\n### Citations\n\n[1] https://ieeexplore.ieee.org/abstract/document/10448030. \n[2] https://ieeexplore.ieee.org/abstract/document/10096877."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. Could the authors provide a more detailed explanation of the differences between the two types of AAC architectures? Additionally, could the proposed method be adapted for application within prefix-tuning structures?\n2. What is the increase in computational cost introduced by the framework? For example, how much additional inference time is required when using stochastic decoding to generate $\\mathcal{B}$ candidate captions?\n3. Considering the advanced reasoning and generative capabilities of large language models, frequently used in AAC tasks, could the proposed approach be adapted to work alongside LLMs to achieve higher-quality captions?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. This paper introduces a novel temporal-similarity score, utilizing the unbiased sliced Wasserstein RBF (USW-RBF) kernel with rotary positional embeddings, to mitigate exposure bias in audio captioning models. Unlike prior research, which has not employed the USW-RBF kernel for cross-modal similarity calculation, this study leverages it to capture temporal dynamics more effectively.\n2. The proposed framework is adaptable to a wide range of existing AAC models, with experimental results underscoring its effectiveness in improving model performance.\n3. Comprehensive qualitative and quantitative experiments support the method's efficacy. Ablation studies comparing various similarity metrics further highlight the advantages of the USW-RBF kernel over alternative approaches."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a novel framework tackling the training-inference mismatch in automated audio captioning (AAC) by introducing a temporal-similarity score based on the unbiased sliced Wasserstein RBF (USW-RBF) kernel with rotary positional embeddings. By integrating this score with a stochastic decoding strategy, the approach effectively addresses caption degeneration issues encountered during inference. Experimental results on established AAC benchmark datasets demonstrate notable improvements in model performance, validated through both quantitative and qualitative metrics."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tCombining the USW-RBF kernel with stochastic decoding strategies may lead to high computational costs. For example, at inference time, generating $\\mathcal{B}$ candidate captions through stochastic decoding results in increased computational time. While the paper demonstrates effectiveness, it lacks a detailed discussion of computational overhead in training and inference.\n2.\tThe performance increase with the proposed approach is relatively minor. According to the original paper, EnCLAP-large achieved SPIDEr scores of 49.5 and 27.8 on AudioCaps and Clotho, while the proposed method reached only 50.0 and 27.5, making the claimed improvement less convincing. According to this comparison, the exposure bias is not that important in AAC tasks.\n3.\tThe application scope of this work appears limited, focusing primarily on AAC tasks. The authors have not explored the framework’s performance on other audio-text multimodal tasks, such as audio-text retrieval, and automatic speech recognition. For instance, can the proposed temporal-similarity score enhance the temporal reasoning capability of the CLAP model?\n4.\tThe study does not deeply explore the sensitivity of the framework to key hyper-parameters, such as the coefficient $\\alpha$ in the objective function or the number of Monte Carlo samples $L$ used for the USW-RBF kernel.\n5.\tThe paper dedicates considerable space to explaining the USW-RBF kernel but provides a limited description of how it integrates with the model itself. For example, it’s unclear whether the text embedding is derived from the penultimate layer of the text decoder or from another layer.\n6.\tAlthough the paper separates AAC models into encoder-decoder and prefix-tuning architectures, with experiments performed only on the encoder-decoder type. The difference between these two types of architecture is not substantial. 
Both approaches essentially share the same structure of an audio encoder and a text decoder.\n(Minor problems - Line 44: the former architecture → the latter architecture)"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "As proposing the unbiased kernel is the core technical contribution in the paper, how much gain is there from a biased kernel to a unbiased kernel ?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "This paper propose the unbiased sliced Wasserstein kernel framework to improve the audio captioning performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper is written to solve the exposure bias problem in audio captioning.\nThey propose the unbiased sliced Wasserstein RBF kernel, which is a better cross-modality similarity measure.\nTogether with the contrastive learning method, gains are observed in the audio captioning tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The motivation of the paper does not sound convincing.\nExposure bias is a general problem to auto-regressive networks. \nIt is not a critical problem for audio captioning.\nIn general, exposure bias can be mitigated by a better training of model. \nI believe using a larger decoder will ease the problem a lot.\nSpecifically, I do not see in the paper how much the exposure bias problem is harming the audio captioning performance.\nEven if we regard this as a serious problem, reinforcement learning (RL) should be a popular way to solve it as RL trains the model according to its inference output. The paper doesn't discuss about RL and address the audio captioning problem in a very narrow perspective."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We develop the audio captioning with an unbiased sliced Wasserstien kernel to alleviate caption degeneration"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024acus,\ntitle={{ACUS}: Audio Captioning with Unbiased Sliced Wasserstein Kernel},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3sf7SpOYIe},\nnote={under review}\n}"
},
"abstract": {
"value": "Teacher-forcing training for audio captioning usually leads to exposure bias due to training and inference mismatch. Prior works propose the contrastive method to deal with caption degeneration. However, the contrastive method ignores the temporal information when measuring similarity across acoustic and linguistic modalities, leading to inferior performance. In this work, we develop the temporal-similarity score by introducing the unbiased sliced Wasserstein RBF (USW-RBF) kernel equipped with rotary positional embedding to account for temporal information across modalities. In contrast to the conventional sliced Wasserstein RBF kernel, we can form an unbiased estimation of USW-RBF kernel via Monte Carlo estimation. Therefore, it is well-suited to stochastic gradient optimization algorithms, and its approximation error decreases at a parametric rate of $\\mathcal{O}(L^{-1/2})$ with $L$ Monte Carlo samples. Additionally, we introduce an audio captioning framework based on the unbiased sliced Wasserstein kernel, incorporating stochastic decoding methods to mitigate caption degeneration during the generation process. We conduct extensive quantitative and qualitative experiments on two datasets, AudioCaps and Clotho, to illustrate the capability of generating high-quality audio captions. Experimental results show that our framework is able to increase caption length, lexical diversity, and text-to-audio self-retrieval accuracy. We also carry out an experiment on two popular encoder-decoder audio captioning backbones to illustrate that our framework can be compatible with a diversity of encoder-decoder architectures."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"audio captioning",
"exposure bias",
"multimodal learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/a546d3352b9ff0753d387a59d665375778fd2114.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "ACUS: Audio Captioning with Unbiased Sliced Wasserstein Kernel"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3sfOGsBh85 | CerebroVoice: A Stereotactic EEG Dataset and Benchmark for Bilingual Brain-to-Speech Synthesis and Activity Detection | main | Active | Brain-to-speech Synthesis;Voice Activity Detection;Stereotactic Electroencephalograph;Bilingual and Tonal Speech;Brain Computer Interface | datasets and benchmarks | 5;5;5;6 | 4;4;4;4 | 3;3;1;3 | 2;2;3;2 | 2;4;2;3 | 5.25 | 4 | 2.5 | 2.25 | 2.75 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"Yes, Responsible research practice (e.g., human subjects, data release)"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1) When participants read words aloud, the movement of their vocal tract can influence the EEG recordings. Could the authors address this by using visual cues and having participants read the cues silently without the movement of the vocal tract?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "1) This paper tackles a highly under-explored area, largely limited by the scarcity of curated datasets, by introducing a publicly available bilingual brain-to-speech dataset that holds significant potential for advancing research in this field.\n\n2) The authors propose the MoBSE framework for brain-to-speech synthesis, which achieves improved performance over the FastSpeech 2 baseline."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents CerebroVoice, a bilingual brain-to-speech synthesis dataset featuring stereotactic EEG recordings of Chinese and English words and digits. The dataset is benchmarked for two key tasks: speech synthesis and voice activity detection. Additionally, the authors introduce a novel framework, Mixture of Bilingual Synergy Experts (MoBSE), which employs low-rank expert weights tailored for language-specific decoding tasks. The proposed MoBSE framework demonstrates superior performance compared to the baseline FastSpeech 2 model."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1) The authors explain the advantages of stereotactic EEG over ECoG; however, these invasive methods have limited practicality due to the complexity of data collection. It would be beneficial if the authors addressed why surface EEG, a non-invasive alternative, was not used instead in their study.\n\n2) In Subject 1, electrodes were implanted in the right hemisphere, while in Subject 2, they were implanted in the left. However, both hemispheres could contribute to speech production, suggesting that electrodes should ideally be placed in both hemispheres for each participant. Additionally, data collection was limited to only two participants, which restricts the generalizability of the models built with this dataset.\n\n3) The paper uses only one baseline, based on the FastSpeech 2 architecture, which is primarily designed for text-to-speech tasks. However, there are existing models in the literature for synthesizing speech from invasive and non-invasive multi-channel EEG signals, such as [1], [2], and [3], etc. These models could have been used as baselines for more comprehensive benchmarking of the dataset and comparison with the proposed MoBSE framework.\n\n4) Although the paper focuses on speech synthesis and reports using a Hifi-GAN vocoder for generating speech, it does not present any results for the synthesized audio output. To fully assess the quality of the reconstructed speech, it is essential to include both subjective evaluations (such as mean opinion score) and objective metrics (like mel cepstral distortion and root mean squared error).\n\n5) The model architecture presented in Figure 3 is unclear. FastSpeech 2 typically processes text inputs, yet the authors are instead feeding multi-channel EEG signals to the model. The method for obtaining sEEG embeddings from these multi-channel EEG signals is not explained. 
Additionally, Figure 3 (c) lacks details regarding the structure of the Universal Expert module.\n\n\n\nReferences: \n\n[1] Metzger, Sean L., Kaylo T. Littlejohn, Alexander B. Silva, David A. Moses, Margaret P. Seaton, Ran Wang, Maximilian E. Dougherty et al. \"A high-performance neuroprosthesis for speech decoding and avatar control.\" Nature 620, no. 7976 (2023): 1037-1046.\n\n[2] Kim, Miseul, Zhenyu Piao, Jihyun Lee, and Hong-Goo Kang. \"BrainTalker: Low-Resource Brain-to-Speech Synthesis with Transfer Learning using Wav2Vec 2.0.\" In 2023 IEEE EMBS International Conference on Biomedical and Health Informatics (BHI), pp. 1-5. IEEE, 2023.\n\n[3] Lee, Young-Eun, Seo-Hyun Lee, Sang-Ho Kim, and Seong-Whan Lee. \"Towards voice reconstruction from EEG during imagined speech.\" In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, no. 5, pp. 6030-6038. 2023."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"Yes, Responsible research practice (e.g., human subjects, data release)"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Why do the authors argue that other datasets can not be used for VAD if the labels for that task are obtained automatically?\n- The authors assert that the audio quality was assessed and the recordings edited accordingly during the data curation process. Was this task performed subjectively? Who was in charge of this task? \n- Specifications of audio recording equipment were not included, which is relevant to analysis results and prevent biases in case future data fusion tests can be performed.\n- The authors presented independent results per subject. Were these results obtained using a single model trained with data from the two subjects, or were also two models trained (one per patient)?\n- Results regarding LFS, HGA, and BBS signals are confusing. There is no apparent coherence regarding frequency bands or between subjects' behavior. Why do the authors consider that these experiments provide a benchmark in this field, considering the scarcity of subjects, which limits the power of any analysis? \n- The organization of the paper could be improved. The meaning of LFS, HGA, and BBS features and the relevance of their evaluation should be presented in section 5."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper addresses a relevant topic and introduces a new open dataset that can help advance a field far from being consolidated and where the data is highly costly and complex to acquire. It also provides relevant measures that can help objectively evaluate the improvement of further approaches in this field."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a data set consisting of pairs of stereotactic EEG and speech signals recorded simultaneously and a set of experiments in the context of brain-to-speech synthesis aiming to provide a benchmark for further research in this area. The dataset comprises sEEG and speech signals from two participants, and the protocol included the repetition of auditory stimuli in two languages. The paper also analyses the voice activity detection problem from the sEEG signals. The paper uses a similar architecture to that of the FastSpeech2 TTS model but substitutes phoneme embeddings with a sEEG embedding layer and proposes an alternative way to codify the language information into the network through a MLP layer that weights the feature representation of the network depending on a one-hot-encoding vector that indicates one of two possible languages in the dataset."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The general novelty of the work is limited. The introduced dataset is valuable and constitutes a significant contribution to the academic community because of its complexity, but with such a limited number of participants in the study, it is hard to consider this work a valid benchmark for the task. Moreover, the proposed mixture of bilingual synergy experts component is not presented clearly, and the whole pipeline is not well presented."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "While the authors mention addressing ethical concerns related to patient privacy, I still believe it should be reviewed by the ethics committee, as it is very sensitive to make invasive human neural data publicly available. There is insufficient discussion on data storage security measures, access controls, and compliance with data protection regulations such as GDPR."
},
"flag_for_ethics_review": {
"value": [
"Yes, Privacy, security and safety",
"Yes, Responsible research practice (e.g., human subjects, data release)"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Did you perform any statistical significance testing to confirm that the improvements of MoBSE over FastSpeech2 are meaningful?\n\nIs there a reason why raw sEEG data is not provided alongside the processed data, allowing researchers to perform custom preprocessing and explore different frequency bands?\n\nHow and why is positional encoding used in the MoBSE framework? Can you provide more insight into its implementation?\n\nAre there any samples of the reconstructed speech available for qualitative assessment?\n\nHow is VAD accuracy measured exactly? I'm trying to figure out if you chose a window of silence vs speech? how long was the window?\n\nHave you considered combining electrode data from both subjects to create a \"super subject\" to enhance coverage?\n\nThanks,"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "-The authors introduce CerebroVoice, a publicly accessible sEEG dataset tailored for neural to speech synthesis and Voice Activity Detection (VAD). This is particularly significant given the scarcity of publicly available sEEG datasets and benchmarks, providing a valuable resource for researchers to compare and validate their methods, fostering progress in brain-computer interface applications.\n\n-By incorporating bilingual data, specifically focusing on a tonal language, Chinese Mandarin, the dataset opens new avenues for research, addressing the complexities associated with tonal languages in brain-to-speech synthesis.\n\n-The methodology for data acquisition is thoroughly and clearly explained, ensuring transparency.\n\n-The authors introduce a Mixture of Experts (MoE)-based framework for neural-to-speech synthesis, which improves bilingual decoding by dynamically organizing language-specific experts. This novel approach outperforms the FastSpeech2 baseline, demonstrating its effectiveness.\n\n-The authors address important ethical concerns related to patient privacy and the sensitive nature of invasive neural recordings, demonstrating a strong commitment to ethical research practices."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces CerebroVoice, a new dataset for bilingual brain-to-speech synthesis and Voice Activity Detection (VAD) using stereotactic EEG (sEEG). It includes recordings from two bilingual participants who read Chinese Mandarin words, English words, and Chinese Mandarin digits. The authors developed a novel method called Mixture of Bilingual Synergy Experts (MoBSE) that uses a language-aware dynamic organization of low-rank expert weights and tested it against the FastSpeech2 baseline, setting a new benchmark for their dataset. They found that MoBSE performs better than FastSpeech2 in producing speech from neural recordings. Additionally, they reproduced three existing VAD methods and established benchmarks for VAD using CerebroVoice. The dataset is publicly available on Zenodo, and the preprocessing code can be found on GitHub."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "-The CerebroVoice dataset is limited by its small size, featuring only two participants and a repetitive, narrow vocabulary, which restricts its generalizability and raises concerns about potential overfitting. Its focus on simple speech synthesis tasks diminishes its flexibility for broader neuroscience research areas such as brain decoding and semantic reconstruction. Additionally, the task design lacks originality, as many similar speech synthesis/reconstruction objectives have been addressed in previous studies [1, 2, 3]. Most existing invasive datasets can be requested from the authors, while non-invasive ones are generally publicly available, reducing the novelty of CerebroVoice’s contribution to the field. To enhance its impact, the authors could consider expanding the dataset with more participants and a more diverse vocabulary and/or task in future work.\n\n-The GitHub repository lacks implementations of the proposed models, hindering reproducibility and preventing other researchers from building upon the work. It would be beneficial for the authors to include model implementations, training scripts, and detailed documentation in their GitHub repository.\n\n-It is unclear how FastSpeech2 was adapted to produce audio from sEEG signals. The paper does not provide a detailed explanation of the training procedures, architectural changes, or loss functions used in adapting this text-to-speech model for brain-to-speech synthesis. Providing specific details about these adaptations would make the methodology more understandable and reproducible.\n\n-The architecture of the experts within the MoBSE framework is not clearly explained, leaving gaps in understanding how the model functions. It does not specify how many experts were used in the MoBSE framework and lacks ablation studies to justify this choice, hindering the evaluation of the model's components.\n\n-The evaluation primarily uses the Pearson Correlation Coefficient (PCC). Including additional metrics like ESTOI (Extended Short-Time Objective Intelligibility) would provide a more comprehensive assessment of speech synthesis quality. This is a very common metric in speech synthesis/reconstruction tasks.\n\n[1] M. Angrick, M. Ottenhoff, S. Goulis, A. J. Colon, L. Wagner, D. J. Krusienski, P. L. Kubben, T. Schultz, and C. Herff, “Speech synthesis from stereotactic EEG using an electrode shaft dependent multi-input convolutional neural network approach,” in 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC).\n\n[2] Verwoert, Maxime, et al. \"Dataset of speech production in intracranial electroencephalography.\" Scientific Data 9.1 (2022): 434.\n\n[3] Akbari, Hassan, et al. \"Towards reconstructing intelligible speech from the human auditory cortex.\" Scientific Reports 9.1 (2019): 874."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"Yes, Responsible research practice (e.g., human subjects, data release)"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Line 162: What does “a Python-scripted audio playback and sEEG-marking mechanism” mean? At the onset of audio stimuli (not the participant’s audio), the system sends a marker to the sEEG recordings to identify the onset of audio stimuli."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "**Significance**: Open-source sEEG speech datasets are rare. Their publishing of the dataset (Line 035) is good news for the community as it will lower the entry threshold for future research. Additionally, they demonstrate how different sEEG features (e.g., LFS, HGA, BBS) affect the performance of brain-to-speech synthesis and voice activity detection. These results may help future works on speech decoding.\n\n**Clarity**: The text has a good structure and is well-written. The figures also help in understanding the method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors introduce a novel dataset, CerebroVoice (publicly available), for bilingual brain-to-speech synthesis and a neural architecture MoBSE, which utilizes a language-aware prior dynamic organization for efficient handling of language-specific decoding tasks.\n\n**Dataset**: The audio stimulus set contains `50` different stimuli, including 30 Chinese Mandarin words, 10 Chinese Mandarin digits, and 10 English words. For each trial, one randomly selected audio stimulus is played; then, the patient is asked to repeat that word (or digit). The dataset includes `1600` trials (i.e., 29 trials per Chinese Mandarin word, 48 trials per Chinese Mandarin digit, and 24 trials per English word). In each trial, two kinds of brain responses are recorded, including listening and reading. Each trial lasts either `4` or `5` seconds and is paired with the corresponding audio recording.\n\n**Model**: The authors propose MoBSE, which is similar to `model ensemble`. MoBSE uses an additional gating module to support the dynamical fusion of the outputs from different experts.\n\n**Experiment**: Previous methods (e.g., FastSpeech2, EEGNet, STANet, EEGChannelNet) are compared. Besides, the authors conducted different ablation studies regarding sEEG settings (sEEG feature, subject, word categories, etc.).\n\n**In summary, it seems like a dataset paper.**"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**Major**\n1. Why is common average referencing, instead of the Laplacian referencing used in BrainBERT[1] (for listening decoding) or the bipolar referencing used in Du-IN[2] (for speech decoding)? Could you provide brain-to-speech synthesis results based on either Laplacian or bipolar referencing? Although previous studies[3] on speech synthesis use common average referencing + HGA, the speech synthesis task has a trivial solution (the mel-spectrum distribution of human speech is easy to regress). Maybe I’m wrong, but with these additional results, we can gain a deeper understanding of the dataset. Could the authors include the results of brain-to-speech synthesis (i.e., Table 1) based on the preprocessed data after either Laplacian or bipolar referencing?\n\n2. How about the results of word classification? The CerebroVoice dataset includes at least `24` trials per word, so it should be possible to evaluate a 30-way classification task (i.e., 30 Chinese Mandarin words). Could the authors include results on word-classification tasks (e.g., 30-way on Chinese words, 10-way on Chinese digits, 10-way on English words)?\n\n**Minor**\n1. Line 90: Additional publications the authors should be aware of:\n - In Du-IN (https://arxiv.org/abs/2405.11459), their preprocessed dataset is openly available.\n\nCould the authors summarize these works in Table 1?\n\n2. Line 99: Additional publications the authors should be aware of:\n - In Feng et al. (https://www.biorxiv.org/content/10.1101/2023.11.05.562313v3), they also explore speech decoding based on a tonal language (i.e., Chinese Mandarin).\n\nCould the authors summarize these works in the Related Works?\n\n**Reference**\n\n[1] Wang C, Subramaniam V, Yaari A U, et al. BrainBERT: Self-supervised representation learning for intracranial recordings[J]. arXiv preprint arXiv:2302.14367, 2023.\n\n[2] Zheng H, Wang H T, Jiang W B, et al. Du-IN: Discrete units-guided mask modeling for decoding speech from Intracranial Neural signals[J]. arXiv preprint arXiv:2405.11459, 2024.\n\n[3] Chen J, Chen X, Wang R, et al. Subject-Agnostic Transformer-Based Neural Speech Decoding from Surface and Depth Electrode Signals[J]. bioRxiv, 2024."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We present CerebroVoice, the first public sEEG dataset for bilingual brain-to-speech synthesis and voice activity detection. Our MoBSE model shows significant performance improvements, providing insights for brain-computer interfaces."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024cerebrovoice,\ntitle={CerebroVoice: A Stereotactic {EEG} Dataset and Benchmark for Bilingual Brain-to-Speech Synthesis and Activity Detection},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3sfOGsBh85},\nnote={under review}\n}"
},
"abstract": {
"value": "Brain signal to speech synthesis offers a new way of speech communication, enabling innovative services and applications. With high temporal and spatial resolution, invasive brain sensing such as stereotactic electroencephalography (sEEG) becomes one of the promising solutions to decode complex brain dynamics. However, such data are hard to come by. In this paper, we introduce a bilingual brain-to-speech synthesis (CerebroVoice) dataset: the first publicly accessible sEEG recordings curated for bilingual brain-to-speech synthesis. Specifically, the CerebroVoice dataset comprises sEEG signals recorded while the speakers are reading Chinese Mandarin words, English words, and Chinese Mandarin digits. \nWe establish benchmarks for two tasks on the CerebroVoice dataset: speech synthesis and voice activity detection (VAD). For the speech synthesis task, the objective is to reconstruct the speech uttered by the participants based on their sEEG recordings. We adopt FastSpeech2 as the baseline model and propose a novel framework, Mixture of Bilingual Synergy Experts (MoBSE), which uses a language-aware dynamic organization of low-rank expert weights to enhance the efficiency of language-specific decoding tasks. The proposed MoBSE framework achieves significant performance improvements over FastSpeech2 across all subjects, producing more natural and intelligible reconstructed speech. \nThe VAD task aims to determine whether the speaker is actively speaking. In this benchmark, we adopt three established architectures and provide comprehensive evaluation metrics to assess their performance. Our findings indicate that low-frequency signals consistently outperform high-gamma activity across all metrics, suggesting that low-frequency filtering is more effective for VAD tasks. This finding provides valuable insights for advancing brain-computer interfaces in clinical applications. \nThe CerebroVoice dataset and benchmarks are publicly available on Zenodo and GitHub for research purposes."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Brain-to-speech Synthesis",
"Voice Activity Detection",
"Stereotactic Electroencephalograph",
"Bilingual and Tonal Speech",
"Brain Computer Interface"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/d9e45815b32e825d17ebd876b5955b2eee0bfcfc.pdf"
},
"presentation": null,
"primary_area": {
"value": "datasets and benchmarks"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/8543ad10a7556610526d084d5c7fcb17fc562103.pdf"
},
"title": {
"value": "CerebroVoice: A Stereotactic EEG Dataset and Benchmark for Bilingual Brain-to-Speech Synthesis and Activity Detection"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3tukjsVyrE | Scaling Speech-Text Pre-training with Synthetic Interleaved Data | main | Active | large language models; speech language model; spoken chatbots | foundation or frontier models, including LLMs | 5;5;8;8;8 | 4;2;4;4;3 | 2;3;3;3;3 | 3;3;4;4;3 | 1;2;3;4;3 | 6.8 | 3.4 | 2.8 | 3.4 | 2.6 | 0.408248 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See weaknesses.\nHow are the speech and text tokens interleaved to form training samples? What are the details of this data creation process?\nHow does the model benefit from interleaved speech and text modalities? \nHow do you deal with the different sampling rates and information granularities between speech and text tokens during the process?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The ASR-based speech tokenizer achieves semantic information preservation and decent speech audio reproduction at the same time.\n2. The low-bitrate speech tokenizer and the text-to-token model effectively use existing large amounts of text data to synthesize large amounts of speech tokens, which saves the resources needed to collect large amounts of speech audio data and improves the language model's speech performance after pretraining."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a speech-text pretraining process for scaling speech-language model training without acquiring large amounts of speech audio data. The process mainly includes an ASR-based low-bitrate speech tokenizer and a text-to-speech-token model to produce large quantities of speech tokens for speech-text pretraining. The pre-trained model is fine-tuned on spoken dialog datasets and shows competitive performance compared to existing SOTA models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The weaknesses are mainly in terms of paper writing and presentation. \n1. The paper mentions \"we are first to use supervised semantic tokens for SpeechLMs\". However, one of the baselines, Mini-Omni, also uses a Whisper-based speech tokenizer. \n2. The details on how the speech and text modalities are interleaved are missing. \n3. As an important part of the process, the details of the text-to-token model are missing—for example, model architectures, training schemes, etc.\n4. The large amounts of speech tokens generated by the text-to-token model are still from existing datasets and speech-synthesized audio from text. How is this process different from generating speech tokens from synthesized speech audio using large amounts of text? For example, llama-omni also uses cosy-voice to synthesize speech audio to augment training data. What's the innovation here between text-to-speech-to-token and text-to-token?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "While the whisper ASR model has achieved excellent performance on a range of tasks, it does have its limitations, especially with regards to unseen or low-resource languages. That is not an issue for this paper which seems to focus on English (although there was quite a bit of Chinese data used as well). Have the authors given any thought as to how to extend this work to cover more languages?\n\nAre there any plans to open source the tokenizer, text-to-token model, or the speech LM itself?\n\nAlso, it would be nice if the authors could describe the amount of computation required to pretrain the speech LM."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "This paper is a nice contribution to the very hot topic of speech LMs. By developing an effective speech tokenizer and text-to-tokenizer model the authors are able to create a very large speech language model that produces impressive results on a wide range of tasks. The authors perform extensive experiments and ablation studies on the speech tokenizer, speech generator (decoder), and the speech LM. The model is able to achieve strong performance on both spoken language modeling and spoken question answering tasks. Finally, when fine-tuned on dialogue data, the model does well on a spoken chat-bot task."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper is about scaling up data to train large speech language models. The authors present a method for tokenizing speech using the Whisper encoder and demonstrate their tokenizer retains semantic information as well as fine-grained information for good quality speech generation. They also describe a method for training a text-to-token model. With these, they are able to tap into large resources of text data to generate synthetic training data, which they interleave with other conventional text and speech/text sources to pre-train a speech LM. By fine-tuning the LM on a dialogue corpus they demonstrate a speech chat-like capability. Extensive experimentation is performed, and the speech pretrain method does quite well on a range of tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Although this is not necessarily a weakness, this paper seems very strong on the engineering side and a little weaker on the novelty side of things. The recipe the authors put forward consists of three separate steps 1) tokenizer, 2) text-to-token model 3) pretrain speech LM. While the authors build a strong tokenizer based on the Whisper model, the approach is not especially novel as it is built on top of a strong speech recognition model. Likewise the use of a TTS corpus to learn a text-to-token model is a nice approach, but has been done before to learn similar kinds of models (e.g., Hsu et al., Text-Free Image-to-Speech Synthesis Using Learned Segmental Units, 2020). Finally, the interleaving of different kinds of text and speech data to pretrain an LLM with an additional token vocabulary is not especially novel. However, while these points are arguably true, I find it impressive that the authors have put all the pieces together to create a very strong speech LM."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Please consider clarifying the following questions in the final revision (apologies if I missed some details that are in the paper):\n\nSection 2.1 and Table 1:\n- I’m assuming that you measured WER for content/semantic retention - is that right?\n- How did you measure the ground truth (Whisper?)? \n- The title says “Speech Tokenizer Results” but the Quality also measures your speech decoder.\n- Is the speech decoder single/multi-speaker? Please add information regarding speaker identity preservation. \n- What was the training data of the speech decoder? (single or multi speaker?)\n- Missing citations (e.g. Expresso). \n- LS stands for LibriSpeech? This isn’t stated.\n\nTable 2:\n- How do you measure WER? (Do you feed the tokens into the bottleneck layer of the quantized Whisper? Or decode the audio using the speech decoder and apply the regular Whisper?)\n\nSection 2.2:\n- Regarding \\eta, please clarify if that is the ratio of text that is replaced or the ratio that audio will take out of the final sequence. I assume it is the first based on section A.1, but it would be better for it to be clear in the main text too.\n\nSection 2.3.2:\nIn the main text, please add that you used GPT-4 to filter examples, shorten the responses, and avoid outputting text that cannot be read aloud (help the reader understand it from the main text). Also consider adding optimization details on the finetuning step (lr, batch size, etc.)\n\n\nSection 3.1: \n- Please help the reader and add in the main paper that the GPT-4 content quality is on a scale of 1-10.\n- The response of the SpeechLM was converted to text before it moved to GPT-4, right? How was the conversion performed? (Quantized whisper? your SpeechLM? Decoded and then used regular whisper?)\n- Two comments regarding the ASR-WER metric in Table 4. First, it is more of a content quality metric than a speech quality metric. Secondly, as there are many ways to answer a question correctly, I suggest moving to ASR-ROUGE or ASR-BLEU instead.\n\nTable 3: \n- What’s the difference between \\emptyset and “-“?\n\n\nComments:\n\n- Fig1a is hard to understand at first glance - specifically, that the yellow tokens are replaced with speech tokens. Adding a color legend (Yellow: SpeechTokens, Cyan: TextTokens) would make it easier to understand.\n\n- The second row in Fig 2a needs to be clarified. “Text—Audio token—>Text-to-token LM” could perhaps be: “Text — (TexttoTokenLM)—> Audio Tokens”.\n- Semantic tokens determine the pacing of speech, which is a part of the speaker’s prosody. Your synthetic audio tokens are not conditioned on a speaker, so you are likely to get the semantic tokens of an average speaker. It is fine overall.\n\n- Regarding audio tokenization - consider reducing the dimension (e.g. from 1024 to 8/16) before quantization to prevent codebook collapse.\n- Moreover, I suggest applying text-tokenization algorithms (BPE?) on the speech units, to produce variable-length representations with a more balanced distribution, and further shorten the audio sequence.\n\nTypos:\n- Line 514: AudioLLM->AudioLM\n- Line 516: Moshi Citet->citep"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Supervised speech tokenizers are a great way to distill the content from audio. Audio is high-dimensional, and using text and a low-bitrate bottleneck to focus on content is a good idea, suitable for SpeechLMs. \n- Training a “TTS” model to generate synthetic audio tokens is interesting, as it doesn’t require generating the final audio (high-bitrate, compute-intensive, issues with OOD synthetic data). Instead, they generate latent audio tokens that focus on content. \n- The interleaving (replacing spans of text with their synthetically produced speech tokens) is interesting, as it forces the model to learn alignment between text and audio tokens. It was also shown to be effective in practice.\n- The ability to perform text-guided responses (which is a kind of chain-of-thought) is interesting.\n- The ablation study was done well."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper tackles the important problem of closing the gap between textLMs and SpeechLMs, to enable a spoken conversation with an AI model.\nThe paper leverages the recently proposed *supervised* semantic tokens (introducing a discrete bottleneck in ASR models), which show better alignment with text. \nMoreover, they train a text-to-audio-tokens model to enable the generation of synthetic audio tokens based on high-quality texts. \nThey suggest randomly replacing textual tokens with the corresponding synthetic speech tokens (resulting in an interleaved sequence), which helps to align the text and audio tokens. \nThey train large SpeechLMs on diverse text/audio inputs (audio only, text only, interleaved, [text,audio] and [audio,text]), and show convincing results on SLM and SQA. \nThey perform supervised finetuning on a proprietary (?) spoken dialogue dataset, and evaluate their model as a spoken chatbot using GPT-4 as a judge."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Several methodological evaluation details are missing (what was measured and how it was computed), mostly in Section 2.1 and Table 1 (see Questions). Whenever you report some metric with an intuitive non-exact name (e.g., Content Preservation - LS), you should explain it more precisely somewhere (e.g., Content Preservation: We run our quantized Whisper on the LS (LibriSpeech) dataset to generate text and report the WER against the GT transcript). I understand that there’s a space limitation, but this is important. \nI've listed some specific details I found missing in the Questions section. I suggest adding a short sub-paragraph that describes the evaluation methodology (defining all datasets+metrics being used) or adding those details into the main text within the relevant sections. If space is an issue, you can add those into an appendix section.\n\n- Currently there's no sample page (unless I missed something). Consider creating a sample page with samples of speech continuation (audio prefix, audio GT continuation, and the model's audio continuation). Also consider adding examples of spoken question answering (audio question, audio GT answer, the model's prediction). Examples from the spoken chatbot evaluation would also be great. Moreover, you could demonstrate what the interleaved samples sound like (a paragraph with its audio-token spans decoded)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. Why is the method pre-trained on Chinese data?\n2. Why is GPT-4 used for scoring?\n3. Why is Whisper used as the encoder? Did it perform better than other encoders?\n4. It is stated that the model can do streaming; was this evaluated?\n5. Why are the ablations also done on a 1.5B model?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. A novel approach for expanding the training corpora of SpeechLMs.\n2. The method is easy to implement, and thus can be expanded to other methods. \n3. State of the art results on sTopic-StoryCloze, sStoryCloze, Web Questions, Llama Questions and TriviaQA."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a method for scaling SpeechLMs using a new pre-training scheme called \"Synthetic Interleaved Data\". In this scheme, a text-to-token LM is first trained on supervised data, and then used to expand text-based datasets by predicting the speech tokens directly from the text data. For pre-training, only spans of text are converted to speech tokens, and text and speech are interleaved with one another. After this pre-training, the model is trained in the usual SpeechLM format. The strength of this approach is the ability to generate a large-scale dataset from text corpora. Furthermore, this method shows strong results on speech-understanding and generation datasets.\n\nOverall, the method presented in this paper is novel and makes an interesting contribution. On the other hand, the writing is unclear and the evaluations are somewhat lacking. I thus recommend borderline rejection of this paper."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The writing is unclear for most parts of the paper. While Synthetic Interleaved Data is the main contribution of the paper, it is not clear what this means from the abstract and introduction. Furthermore, the main explanation of this is just a small part of the paper. I would suggest reducing the length and condensing the section regarding speech tokenization (as this is a well-established concept) and increasing the amount of detail in the section regarding Synthetic Interleaved Data. I would also suggest adding a better summarization of this concept in the introduction.\n2. While there is a good ablation analysis, there aren't any explanations for why the architectural / training parameters were chosen as they were. I suggest adding a dedicated subsection, or adding this into the methodology section where the parameters are introduced.\n3. The training datasets are lacking in clarity:\n- For Table 1, it is unclear on which dataset it was evaluated and why the MOSNet scores are so low. \n- It is unclear what supervised speech-text data are used to train the model.\n- It is unclear what datasets are used to train the text-to-speech-tokens model. \n- It is unclear what datasets are used to fine-tune the tokenization encoder and decoder.\nThese should be added in the section specifying the training pipeline or in a dedicated table / figure.\n4. The experimental results are lacking in clarity:\n- The origin of the baseline numbers in all tables is missing: are these from other papers or from independent evaluation? I would suggest adding this directly to the table or in the caption.\n- In Table 3, SpeechGPT and Spectron are speech-to-speech methods, while the results are stated as speech-to-text.\n- In Table 1 MOSNet was used, while in Table 4 UTMOS is used. The reason for this should be explained in the paper, or uniformity should be maintained between them. \n5. The paper is lacking some evaluations:\n- Human evaluations of speech quality, such as MOS or MUSHRA evaluations, where humans rate the speech quality of the proposed method compared to the baselines.\n- Evaluation on other tasks, such as speech continuation, reconstruction, and TTS for the full method. Speech continuation and reconstruction results at least should be added, while TTS might be left for future work."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. I'm curious if there is any additional filtering process for the text input into the text-to-tokens model, as there are many texts that cannot be synthesized, such as code or mathematical formulas.\n\n2. Why is it required for the encoder and decoder to be causal when training the speech tokenizer? During the inference stage, speech is segmented into 2s chunks, which does not require a streaming speech tokenizer."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This work demonstrates the need to use interleaved speech-text data for cross-modal pre-training, paving the way for more effective speech-text pretraining. Previous pretraining methods have typically relied on paired ASR or TTS data, which is limited in scale; or independently utilized unsupervised speech and text data, which cannot model the dependency between the two modalities. I believe that this work makes a great contribution to the filed of speech-text pretraining.\n\n2. Although the use of interleaved data has been proven effective in the field of image-text pretraining [1], it has not been explored in the field of speech-text pretraining. In contrast to the vision field, web data can naturally form interleaved image-text data, but it is difficult to collect real data with interleaved speech and text. This work proposes a novel method to synthesize pseudo-interleaved data using a text-to-tokens model, and through thorough experimentation, demonstrates the effectiveness of synthesized data and observes that scaling synthesized data continues to provide benefits.\n\n3. This work is very solid and well-motivated. The paper is well-structured, with a clear presentation of the methodology, experiments, and results. This work also reports state-of-the-art performance in speech language modeling and spoken question answering.\n\n[1] Chameleon team. Chameleon: Mixed-Modal Early-Fusion Foundation Models. arxiv: 2405.09818."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work present a novel approach for scaling speech-text pre-training by leveraging large-scale synthetic interleaved data derived from existing high-quality text corpora. This work utilized existing text-to-speech (TTS) datasets to train a text-to-token language model, which is used to synthesize 600B tokens of speech-text interleaved data. Experiments have demonstrated the effectiveness of incorporating interleaved speech-text data, which can effectively align speech and text. Furthermore, this work constructs a speech instruction dataset, SpeechDialog-90K, to fine-tune models into a chatbot model, which can directly generate speech responses without intermediate text response and significantly improve the previous SOTA."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. There is a lack of performance evaluation for the text-to-tokens model. For example, after converting a piece of text into tokens and then decoding it into speech using a vocoder, what is the ASR-WER of the resulting speech? This result is necessary to demonstrate the semantic representation capability of the tokens generated by the text-to-tokens model.\n\n2. Based on my experience, tokens generated by text-to-models lacks diversity, and the speech instruction dataset SpeechDialog-90K in the SFT stage is also synthesized by TTS, so I am concerned about whether the model can understand real speech input. I checked the evaluation datasets in this work, all of which were synthesized through TTS, lacking evaluation on real speech input (such as AIRBench [2]).\n\n3. The quality of the output speech is not satisfactory, as evidenced by the poor ASR-WER in Table 4. In comparison to llama-omini, which was only trained on 100 hours of speech data, this model was trained on a much larger scale of 700k hours of speech data. The author needs to provide a reasonable explanation for why the ASR-WER is so poor.\n\n[2] Yang, Qian, et al. AIR-Bench: Benchmarking Large Audio-Language Models via Generative Comprehension. arxiv 2402.07729."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024scaling,\ntitle={Scaling Speech-Text Pre-training with Synthetic Interleaved Data},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3tukjsVyrE},\nnote={under review}\n}"
},
"abstract": {
"value": "Speech language models (SpeechLMs) accept speech input and produce speech output, allowing for more natural human-computer interaction compared to text-based large language models (LLMs).\nTraditional approaches for developing SpeechLMs are constrained by the limited availability of unsupervised speech data and parallel speech-text data, which are significantly less abundant compared to text pre-training data, thereby limiting the scalability of SpeechLMs as LLMs.\nWe present a novel approach for scaling speech-text pre-training by leveraging large-scale synthetic interleaved data derived from existing high-quality text corpora.\nOur method employs a supervised speech tokenizer derived from an automatic speech recognition (ASR) model (e.g. Whisper) by incorporating a vector-quantized bottleneck into the encoder. In this process, we create tokenizers with various sampling rates ranging from 50Hz to as low as 6.25Hz. This supervised training approach results in discrete speech tokens with strong semantic preservation even at lower sampling rates, while still maintaining speech reconstruction quality.\nBy synthesizing speech-text data from existing text pre-train corpora with a text-to-token language model and scaling our pre-training to 1 trillion tokens, we achieve state-of-the-art performance in both speech language modeling and spoken question answering, improving performance on spoken questions tasks from the previous SOTA of 13\\% (Moshi) to 31\\%.\nWe further demonstrate that by fine-tuning the pre-trained model with speech dialogue data, we can develop an end-to-end spoken chatbot achieves competitive performance comparable to existing baselines in both conversational abilities and speech quality, even operating exclusively in the speech domain."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"large language models; speech language model; spoken chatbots"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/a570512a55f674448429815bfac08a506151c1f7.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Scaling Speech-Text Pre-training with Synthetic Interleaved Data"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3usdM1AuI3 | BRAID: Input-driven Nonlinear Dynamical Modeling of Neural-Behavioral Data | main | Active | Deep learning;Dynamic modeling;Sensory stimuli;RNN;Intrinsic;Behavior | applications to neuroscience & cognitive science | 3;6;6;6 | 5;4;4;3 | 2;3;4;3 | 2;3;3;3 | 3;3;3;2 | 5.25 | 4 | 3 | 2.75 | 2.75 | -0.816497 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "None noted / see above."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The method dissociates neural-only, vs. behavior-only, vs. shared neural-behavioral spaces.\n- BRAID has the ability to take in measured inputs. However, this is not providing any conceptual advance from existing methods, since they are simply fed into the RNNs with an additional input transformation in the form of K."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "BRAID aims to distinguish the effect of measured inputs towards neural dynamics, while dissociating shared neural-behavioral dynamics, neural-only dynamics, and behavior-only dynamics. It does so by training a series of recurrent neural networks: (1) Stage 1 trains a shared recurrent neural network that outputs both neural activity and behavior for (a) 1-step ahead and (b) multi-step ahead prediction, (2) Stage 2 does similarly but for neural dynamics only, and (3) Stage 3 does so for behavioral dynamics only. The authors show better correlation coefficients with the neural and behavioral activity in validation datasets as compared to some baseline comparisons."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- BRAID is very similar to DPAD (Sani et al., 2024), but with the addition of measured inputs, behavior-only dynamics, and an additional network that predicts m-steps ahead. None of these additions are conceptual advances, and are extremely straightforward additions to an existing method.\n- The paper claims that \"BRAID disentangles [input] dynamics from intrinsic dynamics\", however the contribution of the inputs is not analyzed further at all - can one effectively disentangle input dynamics from intrinsic dynamics via this approach?\n- The modeling strategy is multi-stage and quite involved, with multiple RNNs being trained without fully going into the utility of each one. The 'RNN_{fw}' models seem to be forecasting, but why is it necessary to have a separate RNN for forecasting when the 'RNN_1' model is a dynamical model that should be in theory capable of predicting m-steps forward in time?\n- The R^2 should be reported throughout the paper instead of Pearson's; the R^2 is more standard in this field, and takes into account the predictability using the mean value of the signal.\n- There is no attempt at interpretability of the underlying dynamics and the contribution from different sources as identified by this method.\n- While the authors show that BRAID performs with higher behavior reconstructions than TNDM as shown in the Appendix, this is very much to be expected since TNDM does not optimize separately for behavior reconstruction, as BRAID does. Similarly, DPAD does not either (and does not take in inputs). However, these are very simple to add to both of these methods, and thus do not provide fair comparisons in their existing form."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "* Could the authors attempt to make the writing clearer with regards to explaining the model and the various training stages? Perhaps a summary as a table might help.\n* Is it possible to show results when ablating for the second and third (post-hoc) stages of training? Apologies if I've missed this. Also, could the authors comment on what happens when just training the model end-to-end?\n* Could the authors consider experiments that show the scaling performance of the model vs number of neurons, and attempt to compare with LFADS (and CEBRA) if possible?\n* Could the authors comment on multi-session models, generalisation to unseen sessions?\n* Apart from neural predictivity and behaviour decoding, could the authors perhaps use a similarity metric to explicitly compare the learned and true dynamical systems (e.g. using [Dynamical Similarity Analysis](https://openreview.net/forum?id=7blSUMwe7R))? This should be fairly easy for the synthetic tasks, and should be possible in case of the neural data as well.\n* It would be interesting to see the model's performance on a dataset involving multiple brain regions, but I understand that this would take more time to run and depend on the availability of a dataset.\n* I spotted a potential typo on Line 073 (\"prepossessing\" -> \"preprocessing\"?), and another minor issue on Line 111 (\"intrinsic representation of dynamic.\" -> \"dynamics\"?)."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* The method is well-motivated given prior work, and the causal formulation makes it amenable to real-time inference, which is a strong asset.\n* The figures are neat and the experiments are comprehensive. The writing is mostly clear, but I have some comments on explaining the method in a slightly clearer manner (see Weaknesses).\n* The method performs well not only on synthetic tasks designed to show its efficacy over several baselines, but also on decoding real neural data."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a new deep learning method to jointly model neural and behavioural data by explicitly modelling task inputs, in order to better model and disentangle input effects and intrinsic neural dynamics that are predictive of behaviour. The learning of the intrinsic dynamics is enabled by the use of a forecasting objective, i.e., $m$-step-ahead neural and behavioural prediction. The method involves 2 (or 3) RNN models, and optimisation is done in multiple stages: a pre-processing stage to filter out behaviours that are not relevant to recorded neural dynamics (used instead of actual behaviour for training), a stage to learn the neural dynamics predictive of behaviour, and a stage to learn any residual neural dynamics. The proposed method outperforms several baselines at predicting neural activity and decoding behaviour, and is also amenable to real-time predictions due to its causal formulation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The model architecture and the learning stages are quite complicated, and the writing makes this hard to understand in some places. If I've understood this correctly, there are mainly two RNNs: one to predict the next timestep neural activity given current activity and observed inputs, and another to predict activity $m$ timesteps into the future using just observed inputs. Each of these RNNs' parameters are split into up to 3 subsets, which are optimised sequentially: one to learn behaviourally relevant neural dynamics, one to learn any residual dynamics, and another to learn behaviour that is not encoded in the neural dynamics. This is in addition to an RNN that preprocesses inputs.\n\n While this preprocessing RNN has been ablated for, can the authors comment on how one identifies in practice whether or not to use the second and third stages of training (RNN2 and RNN3), which were mentioned to be optional? In general, some additional clarity in the writing here would be appreciated.\n\n* It would be important to see how the method scales with the number of neurons – based on the details in the appendix, it seems like the maximum dimensionality explored here (in the neural data case) is around 45. Perhaps the authors could run an experiment comparing decoding performance and neural predicitivity for differing numbers of neurons from the same dataset to show this (and also comment on the time taken to train BRAID).\n\n* From the experiments it seems that BRAID is mainly a single-session model. It is well-known that there can be a lot of variability in neural activity across sessions as animals learn to perform the task better or due to some representational drift. 
This does not seem to be addressed in the paper as far as I can see, but could the authors comment on how BRAID generalises to unseen sessions, and also specifically for the later parts of a session when training on the initial parts (one of the 5 folds)?\n\n* While the experiments are comprehensive and baselines have been compared against, I think comparisons with LFADS (and if possible, CEBRA) could be useful here – both these approaches seem slightly less involved in terms of training complexity, but would represent baselines when ablating for explicit input modelling (LFADS) or explicit dynamics modelling (CEBRA). The idea here is also that LFADS is designed to infer inputs to the dynamical system, so it might perform better than the extended TNDM baseline included currently."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Some of the following questions relate directly to the weaknesses raised above. \n- How does your predictor relate to sequential posterior in variational inference?\n- Monkey experiment: What do x^1 and x^2 look like in this task? Is there any non-encoded activity x^3?\n- Could you provide more details on the \"automatic\" selection (L313)? Is it simply picking the best performing?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "- The work tackles a very relevant question in the field of statistical neuroscience of disentangling the relationships between behavior, neural activity and inputs, and shows originality in doing so with careful modeling. \n- The modeling is well-formulated in the main text, and made clear for the reader in the appendix and through provided code.\n- The decomposition into multiple stages and sub-components to encourage learning behavior-relevant activity is interesting and novel (to my knowledge).\n- The authors showcase their model on synthetic and real experimental data, showing strong performance in both. \n- The authors help support the significance of their results by (1) comparing them against many baselines and extensive ablations of their model, and (2) performing many runs, providing error bars for all numerical results. \n- The metrics are meaningful and bypass common non-identifiability problems (such as considering the eigenvalues of A)\n- The figures are clean and easy to understand."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors tackle the problem of disentangling intrinsic from input-driven neural dynamics underlying behavioral measurements. They do so through a novel nonlinear model called BRAID. Their approach is to conceptualize the intrinsic as the generative (i.e. forward) dynamics, and the input-driven as the (posterior-) predictor dynamics. The authors model each of these components with nonlinear DNNs for flexibility, allowing each to be learned. Importantly, they devise a multi-stage training procedure that prioritizes learning behavior-relevant dynamics, with neural reconstruction placed second. They show how this approach can help disentangle the neural dynamics directly relevant to behavior, in both synthetic experiments and monkey motor reaching neural activity."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I am putting a score of 6 (marginal accept) but I would be willing to increase it if the authors can help address my weaknesses/questions, and in particular the \"major\" ones below. \n\nMajor:\n- I am not convinced by the claims of real-time inference in the discussion, and the lack of surrounding literature on the predictor components of the model. Variational inference approaches for sequential models similarly model the (sequential, i.e. filtering) posterior and optimize it jointly with the generative model in the ELBO. I believe the similarities could be actually quite exact, in which case a fairer depiction of the relationship with VI is necessary. On that note, Table 1 refers to your method as pure LL optimization, but perhaps it could be more easily interpreted through an ELBO objective.\n- Most of the results are numerical, and the paper lacks a bit in alternate results such as posterior trajectories or visual reconstruction. \n- In line with the previous comment, the monkey experiment results are numerical and would benefit from some post-training analysis if the goal is to show the model as a modeling and analysis tool. \n\nMedium:\n- Appendix should be for additional but not necessary details to understand the paper. One example of this is the simulations in section 4.1. The notation from L312 follows the appendix where you introduce $f_{C_z}$ and $\\nu, \\bar\\nu$ -- I would make it consistent with eq. (1) or add eq (A.11) to the main text.\n- Identifiability: the authors discuss behavioral- vs neural-relevant activity, but having nonlinear A, Cs are another type of inter-dependence, which could be discussed further.\n- Monkey experiment: numerical performance results are good but sometimes lack transparency in their presentation. For instance, U-BRAID does do better on neural, but that is by construction and does not put BRAID in any lesser light. 
The corresponding paragraph (3 of section 4.2) however does not acknowledge this better performance. Similarly, the authors refer to mmPLRNN as having an \"unfair advantage\" (L451). I would remove this.\n- The relationship with probabilistic formulations is skimmed but still alluded to with the ELBO/LL in Table 1. I would expand further on these ideas. \n\nMinor:\n- Using $\\cdot$ (\\cdot) instead of $.$ (dot) is more standard for place-holder variables in functions (L191, L312, L1015)\n- Dependency on x^1 on Stage 2 could be made more explicit in the text\n- Table 2: bold entries per metric would be more representative"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1.\tIn Figure 1, should there be a connection between stage 2 and behavior Z_k if you are learning C_z(2) in Stage 2 2b? If so, what is the difference between X_k(1) and X_k(2) since both are connected to neural activity via [C_y(1), C_y(2)] and to behavior via [C_z(1), C_z(2)]?\n2.\tIn equation 1, the model can learn behavior dynamics that are predictable from the input but are not encoded in the recorded neural activity, but how do you make sure that the behavior prediction is not dominated by the input? In Figure 3a, there is a high correlation between your input target position and your behavior cursor position and velocity. If so, can the model learn any information encoded in neural activity?\n3.\tThe model also has an input term in modeling neural activity y, what does this input mean? Because you have an input in latent space that encodes intrinsic contribution, then you also have the same input to encode input-driven contribution. \n4.\tHow do you define your initial states x0?\n5.\tI am curious, how do you ensure that the model learned all encoded information and the residual is the non-encoded information since the model is nonlinear with high flexibility? \n6.\tFollowing Q5, you mentioned you trained the model until convergence, could you show your learning curve against epochs with zooming in the last epochs if necessary?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "BRAID considers input-driven neural and behavioral dynamics and dissociates them. The authors performed ablation studies to shown which parts of BRAID contributed to the neural and behavior forecasting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduced a novel framework, BRAID, modeling nonlinear neural and behavioral dynamics with external input. This model dissociates behaviorally relevant neural dynamics, neural specific and behavioral specific dynamics, and outperforms baselines."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Although the model has been shown to be highly efficient in modeling neural spiking data, it has not been tested on other modalities, such as widefield calcium imaging.\n2. Although the authors mentioned that their model is not to infer unmeasured input, I still think this may be a weakness of this model, because (for example) the neural and behavioral dynamics can be encoded by unmeasured input."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024braid,\ntitle={{BRAID}: Input-driven Nonlinear Dynamical Modeling of Neural-Behavioral Data},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3usdM1AuI3},\nnote={under review}\n}"
},
"abstract": {
"value": "Neural populations exhibit complex recurrent structures that drive behavior, while continuously receiving and integrating external inputs from sensory stimuli, upstream regions, and neurostimulation. However, neural populations are often modeled as autonomous dynamical systems, with little consideration given to the influence of external inputs that shape the population activity and behavioral outcomes. Here, we introduce BRAID, a deep learning framework that models nonlinear neural dynamics underlying behavior while explicitly incorporating any measured external inputs. Our method disentangles intrinsic recurrent neural population dynamics from the effects of inputs by including a forecasting objective within input-driven recurrent neural networks. BRAID further prioritizes the learning of intrinsic dynamics that are related to a behavior of interest by using a multi-stage optimization scheme. We validate BRAID with nonlinear simulations, showing that it can accurately learn the intrinsic dynamics shared between neural and behavioral modalities. We then apply BRAID to motor cortical activity recorded during a motor task and demonstrate that our method more accurately fits the neural-behavioral data by incorporating measured sensory stimuli into the model and improves the forecasting of neural-behavioral data compared with various baseline methods, whether input-driven or not."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Deep learning",
"Dynamic modeling",
"Sensory stimuli",
"RNN",
"Intrinsic",
"Behavior"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/fe1071b2c453fa5e7cd35c024ad10a45fc21f1e5.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to neuroscience & cognitive science"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "BRAID: Input-driven Nonlinear Dynamical Modeling of Neural-Behavioral Data"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3vE4B61VSw | Accurate Split Learning on Noisy Signals | main | Active | Split Learning;Denoising techniques | optimization | 3;6;6 | 4;2;2 | 2;3;3 | 2;3;3 | 1;3;4 | 5 | 2.666667 | 2.666667 | 2.666667 | 2.666667 | -1 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1. The denoising mechanism requires the addition of a tanh function. Could this cause performance degradation? Is it always applicable to general ML tasks?\n2. There are no empirical results for cut layers below L-2. Exploring these results could help elucidate the proposed methods' limitations.\n3. Could the authors provide more insight into why scaling does not mitigate FSHA effectively?\n4. Could the authors discuss the limited performance improvement of the denoising mechanism on Laplacian noise?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper provides a theoretical proof explaining how the proposed denoising methods reduce MSE under certain conditions, effectively denoising while preserving privacy guarantees. The paper covers both a simple identity layer and a more complex linear layer with non-linear activation, with implications for both classification and broader applications.\n2. It discusses how the two denoising methods approximate DP-SGD and contribute to privacy.\n3. Simulations are conducted for both linear and non-linear cases. Results indicate that the denoising methods exhibit different characteristics at varying noise scales, validating the theoretical claims and offering guidance on the application of each method.\n4. The authors conduct practical experiments across five datasets and various ML tasks, including image classification, recommendation, and language modeling. The results show significant improvements, with accuracy close to the clean model performance.\n5. The paper examines how hyperparameter settings (e.g., learning rate, weight decay, and optimizer choice) impact denoising performance in practical CIFAR-10/100 image classification tasks.\n6. Empirical studies indicate that the proposed method, combined with noise injection, effectively mitigates FSHA.\n7. Code is available for review and extensive discussion & extra experiments are available in appendix"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces two denoising techniques—scaling and random masking—for Differential Privacy (DP) within the Split Learning framework. These methods aim to preserve security guarantees while maintaining model accuracy. The authors focus on theoretical contributions supported by extensive simulation and empirical studies, demonstrating that the proposed techniques enhance the accuracy of split neural network classification under DP. Additionally, the paper shows that the resulting deep neural networks are resilient to state-of-the-art hijacking attacks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The theorem and its proof are limited to L-1 layers, which restricts the scope.\n2. While simulation largely supports the theoretical findings, discrepancies are observed between simulation and practical results. For example, scaling outperforms masking in simulation, though this does not hold consistently in practice.\n3. Performance degradation occurs when the cut layer is set below L-1.\n4. The paper does not explore hybrid applications of the two denoising methods.\n5. The empirical study tests only a limited set of noise-injection parameters (e.g., noise scale = 0.7). Exploring denoising limitations would be informative.\n6. The empirical study also uses a limited range of cut-layer settings.\n7. The paper evaluates only a limited range of attack types.\n8. The techniques are demonstrated only in a two-party split learning setup, despite their potential applicability to Split Federated Learning, which has broader real-world use."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1) The literature review could be expanded to include more recent studies, providing a broader context for the proposed techniques.\n2) Additional details on the experimental setup would enhance reproducibility and transparency, allowing other researchers to validate the findings.\n3) The paper could benefit from more comprehensive discussions on the limitations of the proposed methods and potential future work.\n4) A more detailed analysis of the impact of varying noise levels on training accuracy could provide deeper insights into the practical applications of the proposed techniques."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1) The paper addresses a critical issue in Split Learning regarding data privacy and accuracy, which is highly relevant in today’s data-driven environment.\n2) It introduces two novel denoising techniques—scaling and random masking—that show significant promise in improving training performance under noisy conditions.\n3) The theoretical analysis is well-supported by experimental results, enhancing the validity of the proposed methods.\nThe clarity of the writing and organization of the content facilitate understanding, making complex ideas accessible to a broader audience."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents innovative denoising techniques for Split Learning to enhance training accuracy while preserving data privacy against reconstruction attacks. The authors propose scaling and random masking methods, demonstrating their efficacy through theoretical and experimental results."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1) The experimental validation may lack diversity in the datasets used, potentially limiting generalizability.\n2) Comparisons with existing methods could be more robust to highlight the advantages of the proposed techniques.\n3) The paper could benefit from clearer explanations of the denoising algorithms for readers unfamiliar with the underlying concepts."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. Is there any conclusion on the choice of parameters (i.e. \\lambda, p). Authors mostly use \\lambda=0.1 or 0.2, p = 0.1 or 0.2. How did author choose on this setting? Does the theoretical analysis provide guidance on how we should choose \\lambda and p?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The authors aim to demonstrate that the proposed denoising methods (scaling and random masking) improve test accuracy through both theoretical analysis and empirical evaluation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper discusses the use of scaling and random masking as a denoising method for noise-injected networks in the context of split learning. The authors present both theoretical and empirical evidence showing that applying these denoising techniques during the training phase improves testing accuracy. Additionally, the study demonstrates that the proposed denoising methods enhance the network’s resilience against feature space hijacking attacks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I believe the focus of the paper is misaligned with the target. First, I think the motivation was made clear: to protect models against reconstruction attacks, noise is often added to intermediate representations (IRs), which leads to a drop in accuracy. This creates a privacy-accuracy or robustness-accuracy trade-off. The primary goal of the paper should be to demonstrate that the proposed methods (scaling and random masking) improve this trade-off, but unfortunately discussions about this trade-off is completely missing.\n\nThe authors dedicate substantial space to showing that the denoising methods preserve accuracy when noise is injected. However, this seems unnecessary. It is intuitive that denoising in noise-injected networks would help maintain accuracy—a rather trivial observation. Variance scales as the square of the variable scaling, so scaling down naturally reduces variance. Similarly, adding random masking during training makes the network more robust to noise. This is intuitive and easy to understand. \n\nWhat the authors really want to show is that the proposed methods also helps in improve network privacy (or at least no degrade on it) so that a better trade-off become possible. This is critical but unfortunately, the authors spend so less effort on it. I expected to see quantitative measurements against multiple types of attacks, but only the feature space hijacking attack is covered, while other attacks mentioned in the introduction are ignored. Additionally, quantitative results are lacking. \n\nAs such, I feel authors fail to demonstrate a better accuracy-privacy trade-off, leading to their contribution unjustified.\n\nAdditionally, they are also minor issues in both theoretical and empirical study. In my understanding, the denoising methods are applied during the training, so with and without this it will lead to different weights. I don't think it is reflected in theoretical analysis. 
In the empirical analysis part, I don't think it is the best choice to show graphs of accuracy by training epoch as it takes a lot of space. Instead, I would expect to see much richer results from using different parameter settings."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024accurate,\ntitle={Accurate Split Learning on Noisy Signals},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3vE4B61VSw},\nnote={under review}\n}"
},
"abstract": {
"value": "Noise injection is applied in Split Learning to address privacy concerns about data leakage. Previous works protect Split Learning by adding noise to the intermediate results during the forward pass. Unfortunately, noisy signals significantly degrade the accuracy of Split Learning training. This paper focuses on improving the training accuracy of Split Learning over noisy signals while protecting training data from reconstruction attacks. We propose two denoising techniques, namely scaling and random masking. Our theoretical results show that both of our denoising techniques accurately estimate the intermediate variables during the forward pass of Split Learning. Moreover, our experiments with deep neural networks demonstrate that the proposed denoising approaches allow Split Learning to tolerate high noise levels while achieving almost the same accuracy as the noise-free baseline. Interestingly, we show that after applying our denoising techniques, the resultant network is more resilient against a state-of-the-art attack compared to the simple noise injection approach."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Split Learning",
"Denoising techniques"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/7ea82d9fc9b7d035076a18775693a1507cb599d4.pdf"
},
"presentation": null,
"primary_area": {
"value": "optimization"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/58baa0cb9796821cb4c90445d312589b76397971.zip"
},
"title": {
"value": "Accurate Split Learning on Noisy Signals"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3vSN5Oumob | Revised NTK Analysis of Optimization and Generalization with Its Extensions to Arbitrary Initialization | main | Active | neural tangent kernel;optimization;generalization | optimization | 1;3;5;6 | 5;5;3;3 | 2;1;3;3 | 2;1;2;3 | 1;2;2;1 | 3.75 | 4 | 2.25 | 2 | 1.5 | -0.911322 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Please look at the weaknesses."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The work provides a list of theoretical results. I can only check some of them and believe they are correct. \nIt proposes a more realistic setting for the NTK."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work focus on two problems: under what conditions the training error bound and the generalization error bound are obtained and go to zero in the NTK setting. The paper disproves the work of Arora and then provides proofs for better results, particularly for the case the initial values of training weights do not decrease with the sample size."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The crucial condition for Theorem 2.3 hold is $\\kappa = O(n^{\\alpha})$ with $\\alpha <0$, that only appears its proof in the Appendix. Based on the condition, the considered setting is very specific, that is the initial weights depend on $n$. Let $n$ go to infinity, the $\\|\\mathbf{y}\\|^2$ grows as $n$, the distance from initial weights to its desired destination increases as $n$ increases. So intuitively what is surprise from the result: the training weight cannot converge to the weight in which model has zero loss?\n\nIt is often that the paper states some theoretical results depending on some conditions, then those conditions happen under assuming another set of conditions? Could the author state everything together so that readers can check them all? For example, Proposition 3.1 states condition 1 and 2 hold, when requiring other conditions. In Proposition 3.1, $\\epsilon$ appears, does $\\epsilon$ affect the conditions on $m, n$ and other parameters in the proofs of other theoretical results? \n\n\nThe paper is very long, full of technical, not well-organized. There is no explanation for intuition behind the proofs and the proof's structure. For example, equation (24) appears at the beginning of the Appendix's proof refers to equation (138), which is at 20 pages later. It is not easy for reader to read 50 pages of proof with that presentation. It is unfair to reject the paper just due to poor presentation, but it makes reader hard to verify technical results to be certain that they are all correct. \n\nIn short, could the authors make a table comparing this work and the work of Arora in term of conditions and results and then highlight the technical advancement of this work, also give a discussion about all related parameters in one. Since for this kind of theoretical work, the most difficult task is to be certain that all conditions do not conflict each other and cover a wide range of cases. \n\nMinors\n1. 
Unconventional notation $\\{i\\}$ for the set $\\{1,2,\\ldots, i\\}$\n2. Math notations are not consistent, not defined, $\\Omega(1), O(n), \\Omega(n), \\Theta(n)$, $o(1)$ with respect to $n$.\n3. I could not find the proof of Proposition 3.1???\n4. Line 375, how large is $\\alpha$ acceptable, since $m$ depends on $n$ in other results?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "I would like the authors to response to the weakness section."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "- The orginaztion of the paper is clear and easy to read.\n- Revising the impact of initialization to the generalization error bound is an interesting problem and extends the existing theory of neural tangent kernel."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper revises the Neural Tangent Kernel (NTK) in analyzing the generalization performance of neural networks at the infinite width limit, focusing particularly on the impact of the scale $\\kappa$ of initialization over the bounds for optimization and generalization. Showing that the error bounds in the previous works Arora et al. (2019a;b) actually does not hold in the case $\\kappa = o(1)$ which is necessary for the bound to be non-vanishing. Then, this paper revises the previous result, establishing optimization and generalization bounds independent of $\\kappa$. Basing on the new error bounds, the paper discusses the generalization error bounds with arbitrary initialization. Finally, numerical simulations are provided to verify the theory."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "### Correctness\n\nOne major problem is the correctness of the results in this paper. \nIn the proof of Lemma 4, the paper applies the Markov’s inequality for $\\mathbf{a}$ with fixed $\\check{W} \\in \\Gamma$ to prove Eq. (41), and then use (41) for $W(k)$ subsequently, which is shown to lie in $\\Gamma$. However, since $W(k)$ is random and dependent on $\\mathbf{a}$, there is no guarantee that Eq. (41) holds for $\\check{W}$ replaced with $W(k)$. To make this approach work, a uniform version of Eq. (41) over $\\Gamma$ is needed. Given that Lemma 4 is not well justified, the main results in this paper are not well-supported.\n\n### Novelty\n\nThe results regarding $\\kappa = \\Theta(1)$ are not novel and are slight modifications of the existing results. While the proof seems to be long, they are mostly minor revisions of the existing proofs.\nAlso, this paper considers the simplest setting of a two-layer ReLU network with only the first layer trainable. However, as existing NTK theory (for example, [1]) can deal with more general settings such as multi-layer networks, extensions to these settings should also be considered.\n\nMoreover, regarding the generalization error bounds, this paper misses some related literature studying the NTK regime in terms of kernel regression [2,3], where sharper bounds are established and also do not depend on the scale of initialization (as the NTK in this setting does not depend on the scale).\nI think a more detailed comparison with the existing works should be provided.\n\n\n### Minor\n\nSome notations are used without definition and are not consistent. For example, $H^*$ is used in Section B.2 but defined in Section C, which seems to refer to $H^\\infty$ in the main text.\n\nWhen reviewing this paper, I also find a very recent paper concerning the impact of initialization [4]. 
A comparison with this paralleling work would be benefitial to the readers.\n\n### References\n\n[1] Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via over-parameterization. In International Conference on Machine Learning, pages 242–252. PMLR, 2019.\n\n[2] Namjoon Suh, Hyunouk Ko, and Xiaoming Huo. A non-parametric regression viewpoint: Generalization of overparametrized deep relu network under noisy observations. In International Conference on Learning Representations, 2021.\n\n[3] Tianyang Hu, Wenjia Wang, Cong Lin, and Guang Cheng. Regularization matters: A non-parametric perspective on overparametrized neural network. In International Conference on Artificial Intelligence and Statistics, pages 829–837. PMLR, 2021.\n\n[4] On the Impacts of the Random Initialization in the Neural Tangent Kernel Theory, Guhan Chen, Yicheng Li, Qian Lin. https://arxiv.org/abs/2410.05626"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Could the authors provide a complete proof of Theorem 3.1?\n\nCould the authors clarify the formula numbering and proof structure in Appendix B in a future version?"
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The revised NTK framework proposed in this paper solves the dependency problem between the NTK initialization scale and the sample size. In addition, the numerical experiments in this paper support improving the proposed regularization method in improving generalization performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the optimization and generalization problems of over-parameterized neural networks, proposes a revised NTK framework that eliminates the dependency between initialization scale and sample size, and verifies its experimental results on benchmark datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The writing of the paper is not clear, especially in Appendix B, where the proof structure and formula numbering are very confusing and difficult for readers to understand. In addition, the lack of proof of Theorem 3.1 in the appendix makes it impossible to confirm the correctness of the theoretical results of the paper. Overall, the paper is not yet in a submittable state. I suggest authors complete any unfinished parts and thoroughly reorganize the paper to improve its clarity and readability before submitting it to a future conference."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "I note that the generalization upper bound in Thm 2.5 is \n\n$$O\\left( \\sqrt{ \\frac{2(y - u(0))^T (H^\\infty)^{-1} (y-u(0))}{n}} + \\sqrt{\\frac{\\log {\\frac{n}{\\lambda_0 \\delta}}}{n}}\\right),$$\n\nwhich is about $O(1)$ since $y - u(0)$ is a $n$-length vector. However, during these years, I have seen lots of recent work which provide tighter upper bounds for generalization error of networks based on NTK (e.g., $O(n^{-\\frac{d+1}{2d+1}})$,[Suh et al., 2021], [Hu et al., 2021],), which shows that maybe $O(1)$ is not tight enough. At the very least, an upper bound on the generalization error that does not decrease with increasing sample size n is hard to consider as tight. I think maybe it is a better choice to compare the result with theirs.\n\nIn Table 1, I see that this paper compared \"Original CMD\" with \"Revised CMD\", which is generally the same (e.g., 0.5998 with 0.5997). Table 1 also compares the generalization upper bound, which is common due to the removal of the initialization scale factor. So I am slightly confused by the meaning of Table 1. In Figure 2, I think the experiment should represent more details to make Figure 2(b) reliable (e.g., the stopping time of training). \n\nAdditionally, I don't understand the y-axis of Figure 2(c). I thought CMD is time-invariant and should not change with training, so I am confused about what the y-axis of Figure 2(c) represents. I would appreciate it if you could correct my misunderstanding."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "This paper extends the work of Arora et al. 2019, removing the dependency of initialization on sample size. The authors provide a practical approach that aligns with real-world initialization schemes. This contribution enhances the applicability of NTK theory in real-world scenarios, and can be extended to future research."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper mainly builds on the results in Arora et al. 2019, focusing on the optimization and generalization of over-parameterized neural networks. The paper addresses a limitation in NTK-based analysis in Arora et al. 2019, which requires the scaling of initial parameters to decrease with respect to the sample size, a condition that contradicts practical initialization schemes. To resolve this issue, the authors try to removes the dependency of initialization on sample size, and extend to applying the method in real practice."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The generalization upper bound in Theorem 2.5, approximated as $O(1)$, may not be sufficiently tight compared to more recent works that provide tighter bounds, such as Suh et al. (2021) and Hu et al. (2021), which suggest a bound of $O(n^{-\\frac{d+1}{2d+1}})$. A bound that does not decrease with sample size $n$ raises concerns about its effectiveness. At the same time, the characterization of generalization capability in this paper is precisely based on this upper bound, which makes me feel that it is not solid enough.\n\n\n## References\n\n[Suh et al., 2021] Suh, N., Ko, H., and Huo, X. (2021). \"A non-parametric regression viewpoint: Generalization of overparametrized deep ReLU network under noisy observations.\" In _International Conference on Learning Representations_.\n\n[Hu et al., 2021] Hu, T., Wang, W., Lin, C., and Cheng, G. (2021). \"Regularization matters: A nonparametric perspective on overparametrized neural networks.\" In _International Conference on Artificial Intelligence and Statistics_, pp. 829–837. PMLR."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024revised,\ntitle={Revised {NTK} Analysis of Optimization and Generalization with Its Extensions to Arbitrary Initialization},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3vSN5Oumob},\nnote={under review}\n}"
},
"abstract": {
"value": "Recent theoretical works based on the neural tangent kernel (NTK) have shed light on the optimization and generalization of over-parameterized neural networks, and partially bridge the gap between their practical success and classical learning theory. However, the existing NTK-based analysis has a limitation that the scaling of the initial parameter should decrease with respect to the sample size which is contradictory to the practical initialization scheme. To address this issue, in this paper, we present the revised NTK analysis of optimization and generalization of overparametrized neural networks, which successfully remove the dependency on the sample size of the initialization. Based on our revised analysis, we further extend our theory that allow for arbitrary initialization, not limited to Gaussian initialization. Under our initialization-independent analysis, we propose NTK-based regularizer that can improve the model generalization, thereby illustrating the potential to bridge the theory and practice while also supporting our theory. Our numerical simulations demonstrate that the revised theory indeed can achieve the significantly lower generalization error bound compared to existing error bound. Also importantly, the proposed regularizer also corroborate our theory on the arbitrary initialization with fine-tuning scenario, which takes the first step for NTK theory to be promisingly applied to real-world applications."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"neural tangent kernel",
"optimization",
"generalization"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/e47cb047a5cbf65d99915fa2ee4231803e758a51.pdf"
},
"presentation": null,
"primary_area": {
"value": "optimization"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/c2ba95d4b5fbe5f12c3c458f009aaefae4d86c89.zip"
},
"title": {
"value": "Revised NTK Analysis of Optimization and Generalization with Its Extensions to Arbitrary Initialization"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3vXpZpOn29 | Machine Unlearning via Simulated Oracle Matching | main | Active | machine unlearning;data attribution;training data attribution;privacy | other topics in machine learning (i.e., none of the above) | 6;6;8 | 5;4;4 | 3;3;4 | 2;3;4 | 3;3;4 | 6.666667 | 4.333333 | 3.333333 | 3 | 3.333333 | -0.5 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "# High-level questions\n1. Definition 3. Why is KLoM called KL-divergence of Margins? I couldn't find the reasoning.\n2. Line 321 and Alg A.1. Assuming that dataset $S$ has distinct datapoints, isn't $S_{finetune}$ the same as $S$, implying that Alg A.1 is simply matching the model's outputs to the oracle? Then I don't understand the novelty of Oracle Matching.\n3. Line 364. Why replace the first term with $\\beta$? Linear $\\beta$ is exactly the definition of the estimator $\\hat{f}$, which is only an approximation to $f_x$. It'd make sense to explicitly state that DM-DIRECT approximately simulates the oracle outputs. You could then argue that even this approximate simulator works well empirically.\n\n# Low-level questions\n1. Line 173. Is this estimator/datamodel only for a single $x$? Does this mean that distinct inputs might require distinct datamodels?\n2. Equation 2. What is the approximation in? Is it in some measure of distributions?\n3. Figures. Why are there multiple points on the plot for each legend entry?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "The paper is written extremely well, and has a natural flow to it. When reading I thought of many questions and added annotations, only to find them answered in the next paragraph or section. E.g. the authors introduce Oracle Matching with the strong assumption of oracle access to data attributor $f^{oracle}$ (Section 4.2), and immediately discuss how to simulate such an oracle without such an access (Section 4.3). In Section 5 as well, the paper attends to a natural question readers could have: is oracle matching useful when the problem is easy to solve with gradient descent. This is good writing, and I appreciate the authors' efforts in putting themselves in the readers' shoes.\n\nIn sum, the Oracle Matching and Simulation methods are intuitive yet thoroughly tested on image classification tasks. These methods are original as well, and it is surprising to me that linear datamodels work so well empirically. I'd like to see this paper accepted, and the research directions it inspires."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper addresses the problem of _machine unlearning_ (removing the effect of a few training data points \"forget set\" on the model's outputs) with by reducing it to the problem of _data attribution_ (predicting the effect the training set on the model's outputs). With this, the paper proposes a meta-algorithm Datamodel Matching (DMM) that gets predictions from data attribution on all-but-forget set and finetunes the model to match the predictions. A new unlearning metric KL Divergence on Metrics (KLoM) is also introduced. Finally, the paper presents experiments on unlearning in image classification tasks, showing that DMM is better at unlearning and faster than naive-retraining on the all-but-forget-set."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "It is unclear that linear datamodels extend to other kinds of tasks, e.g. language modeling or regression problems. I believe this to be a major weakness of the paper. While linear datamodels lead to simple algorithms in this paper, the previous work [1] does not have a good argument for why linear datamodels work [1; Section 7.2]---in fact Figure 6 of [1] display imperfect matching using linear datamodels. It'd be useful to mention this limitation in this manuscript as well, and discuss the limitation's impact to machine learning.\n\n# Suggestions:\n1. Line 156. It'd be useful to the reader to add a citation on differential privacy, e.g. one of the standard works like [2].\n2. Line 176. $\\hat{f}$ should have output range in $\\mathbb{R}^k$ since the range of $f_x$ is in $\\mathbb{R}^k$. \n3. Line 182. \"show\" -> \"empirically show\".\n4. Definition 3. Write safe, $S_F$, and input $x$ explicitly in KLoM, otherwise KLoM$(\\mathcal{U})$ looks like KLoM of the unlearning function across _all_ safe functions and inputs. I'm curious why the authors wrote KLoM$(\\mathcal{U})$.\n5. Add a Limitations section.\n\n[1] Ilyas, A., Park, S. M., Engstrom, L., Leclerc, G., & Madry, A. (2022). Datamodels: Predicting predictions from training data. arXiv preprint arXiv:2202.00622.\n[2] Dwork, C., & Roth, A. (2014). The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science, 9(3–4), 211-407."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. This question is regarding the GD baseline. Given a model $\\theta_{full}=A(S)$ trained on a full dataset S (including the retain set $S_R$), GD minimizes the loss on the retain set using gradient descent, starting from $\\theta_{full}$. Since $S_R$ was already in the dataset S that the model was trained on, this essentially involves further training on $S_R$, which could lead to overfitting without effectively forgetting $S_F$. This may explain why, in Figure 3, GD performs similarly to 'Do Nothing' and, on average, even worse. My interpretation is that further training on $S_R$ may not significantly alter a model that was sufficiently trained on S if the loss is strongly convex and could lead to overfitting (and hence degrading performance on the validation) in more complex landscapes. Could the authors clarify if there’s any specific benefit of GD for unlearning that I might be overlooking?\n\n2. The authors mention the possibility of having duplicates across the forget and retain sets on page 6 when discussing the drawbacks of Gradient Ascent. Given that $S_F$ and $S_R$ are sets and $S_R$ is defined as $S \\backslash S_F$, I don't see how this duplication is possible. Could the authors clarify this?\n\n3. Could the authors provide more interpretation of the results in Figure 2? How do you interpret the changes and fluctuations in the red and gray lines?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is well-written and easy to follow.\n\n2. It provides a solid introduction to Unlearning, offering a detailed overview of related work, recent advancements, and existing challenges. The motivation is well-described, and the flow effectively positions this paper within the broader field, helping readers understand its scope and contributions better. The use of data attribution to approximate the Oracle model is particularly compelling."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a new Unlearning algorithm called Datamodel Matching (DMM) to unlearn a forget set by fine-tuning a model that has been pre-trained on a larger dataset including the forget and retain set. They use predictive data attribution to approximate the oracle model---the one that is retrained from scratch on the retain set and hence has not seen the forget set at all. Predictive data attribution learns datamodels for each input x to simulate how a model trained on the retain set would behave on x. With this approximation, DMM then applies Oracle Matching to align the model's output distribution with that of the oracle. They introduce KLoM for measuring the unlearning quality and show empirically that their algorithm outperforms previous gradient-based algorithms, achieving a lower KLoM and quickly approaching oracle-level accuracy with only a fraction of the retraining time."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The experimental results, while supportive of the algorithm’s effectiveness, are somewhat limited in scope. The analysis focuses primarily on CIFAR-10 and an ImageNet subset (Living-17), with figures 1 and 3 illustrating outcomes only on these datasets. Expanding the experiments to include additional datasets would enhance the generalizability of the findings. On the Retain set, Oracle Matching gets close to the Oracle in CIFAR-10 but this is not the case with Living-17 which means that while the algorithm can reduce KLoM on the forget set, it does not work as well on the retain set. More discussion on this observation can improve the understanding of the algorithm’s limitations, and I'm interested to know how this observation extends to other tasks.\n\n2. The paper could benefit from clearer explanations and interpretations of the figures and baselines. The discussion around interpreting each figure is limited. Additionally, some of the baseline methods are not well-defined; for instance, SCRUB is introduced without a description, and GD (Gradient Descent) is not explained well until page 9 (also see my question on GD in the Questions).\n\n3. Minor typo: The RHS of equation (2) should be dependent on x.\n\n4. Calculating one $\\beta$ vector for each x in the dataset appears computationally intensive. Although the authors mention that this is a one-time process that amortizes over unlearning requests, further discussion on its computational cost relative to the unlearning phase would be helpful."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "line 246 - specify $\\text{safe}(S_F)$ - how do you know this safe(S_F)? How did you calculate this? \n\npage 6 - line 317 - this description is clear but what exactly distinguishes your approach from distillation. The comparison between your method and a distillation unlearning approach such as [Efficient Two-stage Model Retraining for Machine Unlearning](https://openaccess.thecvf.com/content/CVPR2022W/HCIS/papers/Kim_Efficient_Two-Stage_Model_Retraining_for_Machine_Unlearning_CVPRW_2022_paper.pdf)\n\nFigure 3 - For this evaluation, did you retrain the model and then computed the unlearned model using oracle matching?\n\nline 365: in this formulation, to create the datamodels for remaining data, you used the datamodel of whole dataset minus the datamodel of forget set. if my understanding is correct, doesn't it add the additional computation because you need to estimate two datamodels first. \nalso can you prove this formulation result in an exact or good estimation of datamodel for remaining data? \n\nline 374: the true oracle output - I am just worried about the closeness of the proxy to actual retain datamodel to the.\n\nline 404 - , the datamodel generalizes well to new forget sets in practice. very interesting how do you demonstrate this ?\n\nORACLE MATCHING FOR LINEAR MODELS: my understanding is that the quality of unlearning for the oracle matching heavily is influenced by the unlearning quality of approximated oracle model . it would be interesting to investigate the influence of that model."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The proposed approach is both innovative and highly impactful, offering substantial reductions in unlearning time. The authors demonstrate a profound understanding of unlearning methods based on gradient descent and fine-tuning. They delve into the challenges of these methods, particularly examining their impact on the model's performance after unlearning. Although gradient-based approaches are among the most effective unlearning methods, they often degrade the predictive performance of the model considerably—a drawback that the authors have thoroughly investigated and discussed.\n\nThe concept of Oracle Matching, combined with the use of DataModels, greatly reduces time complexity. This approach shows considerable promise for improving the efficiency of unlearning processes without compromising model performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces the concept of Oracle Matching to significantly reduce the time complexity of unlearning processes, achieving this in a fraction of the time required for traditional retraining or fine-tuning while maintaining model performance as close as possible to that of full retraining. The authors utilize the concept of \"DataModels\" to efficiently approximate a proxy for the oracle, leveraging this proxy within the Oracle Matching framework."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Grammatical errors:\n- Abstract - line 2\n- The abstract and introduction was written in rush. please \n\n\nEarly introduction of specific notions:\n- such as $S$, $S_R$ in the beginning of the paper without providing the backgrounds and clearly stating them, confuses the reader. (Intro - second paragraph)\n- line 74 - trained model $\\theta$\n\nAmbiguity:\n- line 49 - Simple models // what is considered to be a simple model?\n- line 59 - variety of empirical evaluations and benchmarks // what are these benchmarks? either needs to mention at least one of evaluation criteria, or rephrase the sentence.\n\nIncorrect Statement:\n- line 63 - , fine-tuning-based methods typically employ.... - incorrect, the simple fine tuning only focus on the remaining datapoints and fine tune the model on $S_R$. If there is a paper that conducts the fine tuning in the way you mentioned in this line, you need to point it out, but otherwise, check this paper [\"Model Sparsity Can Simplify Machine Unlearning\"](https://openreview.net/pdf?id=0jZH883i34)\n\nIncorrect notation:\n- line 152 - the notation for approximate unlearning is the same as exact unlearning.\n\nline 83 - Empirically, we find that .... - I was expecting to see a comparison between your method and unlearning distillation approach, but you didn't made that comparison."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "principled and practical algorithm for unlearning training data by leveraging recent advances in predictive data attribution"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024machine,\ntitle={Machine Unlearning via Simulated Oracle Matching},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3vXpZpOn29},\nnote={under review}\n}"
},
"abstract": {
"value": "Machine unlearning---efficiently removing the effect of a small \"forget set\" of training data on a pre-trained machine learning model---has recently attracted significant research interest. Despite this interest, however, recent work shows that existing machine unlearning techniques do not hold up to thorough evaluation in non-convex settings. In this work, we introduce a new machine unlearning technique that exhibits strong empirical performance even in such challenging settings. Our starting point is the perspective that the goal of unlearning is to produce a model whose outputs are *statistically indistinguishable* from those of a model re-trained on all but the forget set. This perspective naturally suggests a reduction from the unlearning problem to that of *data attribution, where the goal is to predict the effect of changing the training set on a model's outputs. Thus motivated, we propose the following meta-algorithm, which we call Datamodel Matching (DMM): given a trained model, we (a) use data attribution to *predict* the output of the model if it were re-trained on all but the forget set points; then (b) *fine-tune* the pre-trained model to match these predicted outputs. In a simple convex setting, we show how this approach provably outperforms a variety of iterative unlearning algorithms. Empirically, we use a combination of existing evaluations and a new metric based on the KL-divergence to show that even in non-convex settings, DMM achieves strong unlearning performance relative to existing algorithms. An added benefit of DMM is that it is a meta-algorithm, in the sense that future advances in data attribution translate directly into better unlearning algorithms, pointing to a clear direction for future progress in unlearning."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"machine unlearning",
"data attribution",
"training data attribution",
"privacy"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/344c7c5cbf130d5c09e10cafe4fd79478fd29985.pdf"
},
"presentation": null,
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Machine Unlearning via Simulated Oracle Matching"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3viQDuclu0 | Memorisable Prompting: Preventing LLMs Forgetting False Positive Alarm | main | Active | Prompt-based task;Large language model;Memorisable Prompting for Data Annotation | foundation or frontier models, including LLMs | 1;1;3 | 5;3;3 | 1;1;2 | 1;1;1 | 1;2;1 | 1.666667 | 3.666667 | 1.333333 | 1 | 1.333333 | -0.5 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. What does it mean by justified approach through the lens of probability? (ln.76). Please be more concrete.\n2. Please fix formatting errors in the related work section, the dataset section, section 5.1, etc. \n3. \"Given new inputs, ... refining out candidate sets and improve LLM's accuracy for subsequent generation.\" (ln. 178-179). Just to clarify, is this referring new inputs from D_small or D_large? Does that mean after each sample, you need to know the correctness of the produced output?\n4. line 209 ChatGPT typo. \n5. line 231 incomplete sentence. \n6. What is P? Are you using P as a random variable, a model or a notation for probability? What do you mean \"Assume P is the LLM\" (ln. 232)?\n7. Is X a single feature or a feature class? or a variable? It is sometimes referred to as a features and other times as features (Section 4.1), but in many equations X is referred to as a variable (e.g., X=x_1 in Eq. 5).\n8. Section 4.3: What is K, what is a category? When is the concept of category introduced? What is the difference between a category and a label? Is Y a category or a label?\n9. Undefined symbols: What is $K$, $c$, $\\vec{Y}_G$, $q$, $\\vec{y}$?\n10. Is $Y_i'$ a vector or a value? It is referred to as a vector in lines 305-306 but a value 1 in line 313. Additionally, could you clarify what does it mean by \"If $Y_i'=1$, the corresponding potential candidate set is the first row of $\\vec{Y}_{\\text{Updated}}$\"? Does the value of $Y_i'$ correspond to \"first row\" of the matrix? Why?\n11. \"i.i.d. sample from the large size dataset\" (ln.321) --> do you mean uniformly sample?\n12. Table 1. Bolding is mentioned in the caption but no values are bolded in the table.\n13. \"The problem with this method is its dependence on multiple sources of paths; even slight changes in one source’s prediction can drastically impact the final prediction.\" (ln. 374-375). Why is this the case? Self-consistency takes the mode of multiple reasoning paths. 
Furthermore, the \"using a single query minimises the uncertainty\" (ln. 371-372) is not well-justified.\n14. All of the figures and Table 1 are not referred to in the text of the paper."
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "The main idea of the paper, which states that memorizing false positive samples would help prediction performance, is interesting and can be a good direction for improving test-time techniques to improve model performance. The datasets and baselines examined in the experiments section is fairly comprehensive."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The main motivation behind this paper is memorizing past mistakes would make LLMs not make the same mistakes again. The authors propose Memorizable Prompting (MP), which allows LLMs to understand response dependence patterns and store them in a memory bank to prevent repeating false positive predictions. The memory bank is constructed using a small labeled set of samples."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The justification for the Memorization masking assumption is confusing (line 225-234). Please lay out the specific constraints assumed made by this claim, and deduce the final assumption. If I understand correctly, this assumption is claiming that assuming if the model has learned the characteristics of a category well enough, then it should reliably predict the same Y' distribution for each Y regardless of the input feature X. The current justification is not clearly or rigorously written and need further editing.\n\nSection 4 is hard to follow and not rigorously written. X and Y are sometimes referred to as variables and some other times as values/examples. For example, equation (4) does not \"simplify\" to equation (5) when the given assumption (ln. 258) is Y=1 for a given X=x1; this is only correct if the given assumption is X=x1 and Y=1. \n\nSection 4.3. It is not clear how M is used in the transformation from $\\vec{Y}$_Query to $\\vec{Y}$_True. Additionally, \"Our goal is to design a prompting scheme to enable LLMs to generate the correct annotation for each q from the corresponding $\\vec{y}$\" (line 303-304). What exactly is the prompting scheme? It is not discussed or included in the appendix.\n\nOverall, there are fundamental flaws in the probabilistic framework and problem formulation. The core mathematical error lies in the proposed marginalization over Y: $P(Y'|X,\\vec{Y}) = \\sum_Y P(Y'|Y,X,\\vec{Y})P(Y|X,\\vec{Y})$. This equation is fundamentally incorrect as Y represents a ground truth label, not a random variable. The main objective to \"obtain\" $P(Y|X,\\vec{Y})$ through $P(Y|Y',X,\\vec{Y})$ shows a fundamental misunderstanding of the causal relationship in supervised learning. THe prediction Y' should not influence the probability of the true label Y. The learning objective should instead be formulated as maximizing the probability of correct classification, $\\arg\\max_G P(Y=y^*|X,Y)$. 
These issues reflect a fundamental misunderstanding of probabilistic modeling, making the proposed method mathematically unsound. While the research direction may be promising, the current formulation requires substantial revision to establish a valid theoretical foundation."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Refer to Weaknesses"
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "The motivation of this paper is explicitly to make the LLMs remember the dependencies of responses and prevent making false positive answers."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The entire paper is more of a technical report in that it improves upon the prompt strategy, which aims to make the LLMs remember the dependencies of responses and prevent making false positive answers."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The entire paper is more of a technical report in that it improves upon the prompt strategy; however, it is difficult to read the entire paper. Even after reading the whole paper, I don't even know what kind of prompt form or few-shot form the authors employ.\n\nThe motivation of this paper is explicitly to make the LLMs remember the dependencies of responses and prevent making false positive answers. However, there are many fundamental errors in both model construction and experimental setups.\n\nIn method construction:\n\n1. The paper is very redundant to introduce the basic definition, and there is a lack of formal definition of basic memorization matrix in preliminaries. \n\n2. The introduction of the method is not clear, I think only two points need to be introduced clearly:\n\n2.1. How is the memorization matrix constructed between the answer and the truth to each query when the truth cannot be obtained during reasoning? How many times does each query need to be answered, and how is the sparsity of the matrix addressed when there are too many categories?\n\n2.2. Does the LLM actually capture dependencies between responses from the matrix? How does the LLM remember and avoid false positive responses? Are there any experiments to prove it?\n\n3. In Section 4.0, the paper introduces the learning conditions, it is very confusing how the sample features are introduced in the method, for example in Eq.3, how to find completely different features that can make the LLM generate the same prediction? I don't think any specific distinct feature alone can do that.\n\n4. Is it using the memorization matrix of few-shot hint samples to predict samples with no truth labels? What is the number of hint samples for each sample? **How to determine the LLM is not mimicking the memory matrix, but actually recording the response to each query.**\n\nIn related work:\nThe paper does not fully investigate the related work and has no logic in expression. 
The paper is based on LLM reasoning and constructs a strategy to improve prompt.\n\nIn the experiments:\n\n\n1. The paper **is lack of experimental setup**, and prompts should be introduced in detail, otherwise it is difficult to reproduce the work.\n\n2. There is too little experimental analysis in the paper, and the experimental Tables and Figures are not clearly introduced. Besides, the paper adopts three LLMs. Is there no difference in performance among the three LLMs? Meanwhile, the paper lacks interpretability experiments to validate the motivation.\n\n3. References to Figures1, 2 and 3 and Tables 1 and 2 are absent from the main paper. There are also several methods that are not detailed in baselines. What is used for correction? What is fot in the Tables? **I think the paper should at least be standardized and clear.**\n\n**There are multiple errors in the paper**, multiple quotes, punctuation, basic grammar errors, **the readability of the paper is poor**, I think the author should take the paper submission seriously."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Here is a revised version of your questions for the article:\n\n- Line 120: Why not simply use $(X, Y, \\mathcal{Y})$ or $(X, Y)$ here?\n\n- Line 132: Is the concept of a \"transition matrix\" derived from Markov models? If so, each row in M should sum to 1. If not, consider using a different term or providing clarification to prevent potential confusion.\n\n- If I understand correctly, Section 4 (Lines 184--241) essentially states that the smaller dataset has a similar distribution to the larger dataset, and thus the matrix M estimated from the smaller set can be applied to the larger set under certain assumptions. Why is such an in-depth discussion on conditional terms necessary if they are simply assumed valid? It seems the numbered equations and many unnumbered ones are not essential, as this is standard machine learning knowledge familiar to the intended audience.\n\n- Line 276: Certain GPT API versions can generate token probabilities alongside tokens. Therefore, the reasoning provided here is not strong enough to justify the innovation in this paper. Additionally, relevant studies should be cited in the related works.\n\n- Line 288: What is the difference between $D_{\\text{hint}}$ and $D_{\\text{small}}$ mentioned earlier? How many data points were used to compute matrix M in the experiments? It is mentioned later that $s=4$. If it accounts for 5% of the total data points, the total number of data points for each dataset should be around 80, which does not match the number in Table 2."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The strength lies in its approach to improving LLMs' reliability by addressing false positives through Memorisable Prompting (MP), which adds a layer of memory to the model's prompting strategy. This technique enhances model consistency by leveraging a memory bank that \"remembers\" past errors, allowing the LLM to self-correct based on learned dependencies. The approach is versatile, effectively improving accuracy across different domains and integrating well with existing prompting methods. Furthermore, the use of a memory masking matrix offers a structured way to manage and apply learned error patterns, adding a new dimension to LLM error management in a practical, adaptable manner."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a method to improve the accuracy and consistency of Large Language Models (LLMs) by enabling them to \"remember\" past errors. It uses small annotated datasets, or hint samples, to learn dependencies between predictions and actual labels, storing this knowledge in a memory bank to help LLMs avoid repeating false positives. By applying a memory masking matrix, MP enhances prediction accuracy across domains and integrates well with various prompting techniques, making it effective for error-prone, high-stakes applications."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The article is difficult to follow due to inconsistent terminology, unclear mathematical notation, grammatical errors, structural issues, and logical gaps, which detract from its clarity and coherence. Overall, the writing does not reflect careful attention to readability and precision.\n\n- The proposed approach appears limited in applicability, focusing primarily on classification tasks. However, this limitation is not addressed in the paper, leading to an incomplete understanding of the method's scope.\n\n- The narrative includes redundant explanations and lacks depth, limiting the reader’s engagement with the insights behind the proposed approach.\n\n- Baseline comparisons are insufficient; it remains unclear why traditional weak supervision methods, such as [1], combined with pre-trained language models were not included for a more comprehensive evaluation of the method.\n\n- The proposed approach shows minimal innovation compared to prior work using transition matrices for prediction calibration, which reduces the originality and significance of the contribution.\n\n[1] Ren, Wendi, et al. \"Denoising multi-source weak supervision for neural text classification.\" arXiv preprint arXiv:2010.04582 (2020)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024memorisable,\ntitle={Memorisable Prompting: Preventing {LLM}s Forgetting False Positive Alarm},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3viQDuclu0},\nnote={under review}\n}"
},
"abstract": {
"value": "Large Language Models (LLMs) are widely recognized for their superior performance across various domains. However, their tendency to generate inaccurate or misleading responses presents significant challenges, particularly in the natural language domain. This issue underscores the need to enhance both the explainability and reliability of LLMs. While recent advancements in prompting have focused on leveraging in-context learning—such as providing step-by-step explanations—these approaches often overlook the critical importance of understanding the response dependency of LLMs on specific datasets. This understanding is crucial for interpreting their outputs and improving their consistency. Moreover, if we can capture and encode these response dependencies, we can integrate them into LLMs as memorized knowledge to mitigate false positive predictions over time. In this paper, we tackle this challenge by introducing the Memorizable Prompting (MP) paradigm, which enables LLMs to retain and utilize information from past responses. Specifically, our approach leverages hint samples—a small set of annotated examples—to learn the response dependencies, defined as the relationship between LLM outputs and the ground-truth annotations for a given dataset. This equips LLMs with the ability to recall past false positives and use that knowledge for self-correction in future predictions. We have evaluated our method on a diverse set of domain-specific datasets, demonstrating its effectiveness across large-scale benchmarks."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Prompt-based task",
"Large language model",
"Memorisable Prompting for Data Annotation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/af5cec55857e0a7ea1b04d7e94efbcc218e2bfa3.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Memorisable Prompting: Preventing LLMs Forgetting False Positive Alarm"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3vxfFFP3q5 | VOVTrack: Exploring the Potentiality in Videos for Open-Vocabulary Object Tracking | main | Active | Object Tracking;Open-Vocabulary | applications to computer vision, audio, language, and other modalities | 5;5;5;5;6 | 4;5;4;5;4 | 2;2;3;3;3 | 2;2;2;2;3 | 2;2;3;2;3 | 5.2 | 4.4 | 2.6 | 2.2 | 2.4 | -0.408248 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Considering CLIP's training process, is there a significant distinction in the representations output by CLIP when tracking-state-aware prompts, such as 'unoccluded' and 'occluded', are provided?\n2. Why does your method improve the ClA for novel categories compared to the baseline OVTrack but decrease scores for base categories? Does this suggest that your model introduces a conflict between tracking and classification? \n3. Could you include a visual representation and analysis of the prompt guided attention? This would more intuitively demonstrate its role and effect."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper offers a detailed explanation of the proposed method. Utilizing unlabeled data for self-supervision serves, to some extent, as an alternative to address the current scarcity of large vocabulary tracking data. \n2. The results show good performance in comparisons on the OVMOT benchmark."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces Tracking-state-aware prompt guided attention, enabling the network to learn the detection of objects in different tracking states. A self-supervised approach is adopted to train tracking, leveraging large-scale, unlabeled video data across various categories. The experimental results on TAO datasets indicate that the proposed method achieves advanced performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The CIsA for basic categories in your method is lower than that of the baseline method OVTrack. Given that basic categories account for the majority of targets, the decrease on CIsA appears to reflect more than just typical fluctuation effects. Could this imply that your model's approach introduces a degree of conflict between tracking and classification based on the baseline? \n2. There is a lack of visualization and analysis for the prompt guided attention, which does not adequately demonstrate its direct impact."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to the concerns and issues raised in the \"Weaknesses\"."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This manuscript is standardized and the writing is fluent, and the content is easy to understand.\n\n2. This manuscript proposes a new tracking-related prompt-guided attention for the localization and classification (detection) in the open vocabulary tracking problem. This method takes notice of the states of the time-varying objects during tracking, which is different from the open-\nvocabulary object detection from a single image.\n\n3. This manuscript develops a self-supervised object similarity learning strategy for the temporal association (tracking) in the OVMOT problem. This strategy, for the first time, makes full use of the raw video data without annotation for OVMOT training, thus addressing the problem of training data shortage and eliminating the heavy burden of annotation of OVMOT.\n\n4. Experimental results on the public benchmark demonstrate that the proposed VOVTrack achieves the best performance with the same training dataset (annotations), and comparable performance with the methods using a large dataset (CC3M) for training."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces Open-vocabulary multi-object tracking (OVMOT), a significant challenge that involves detecting and tracking various object categories in videos, including both known (base classes) and unknown (novel classes) categories. The authors critique existing OVMOT methods for treating open-vocabulary object detection (OVD) and multi-object tracking (MOT) as separate modules, primarily focusing on image-based approaches. To address this, they present VOVTrack, which integrates object states relevant to MOT with video-centric training, approaching the challenge from a video object tracking perspective. VOVTrack features a prompt-guided attention mechanism that enhances the localization and classification of dynamic objects, and it employs a self-supervised object similarity learning technique for tracking using raw, unlabeled video data. Experimental results demonstrate that VOVTrack outperforms current methods, establishing it as a leading solution for open-vocabulary tracking tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The experimental tables lack details on model complexity. It would be helpful to include a table or section comparing FLOPs, parameters, model size, and FPS across the different methods evaluated, including the baseline OVTrack method and other state-of-the-art approaches."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Some important details and explanations should be provided for clarity."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "A novel method that integrates object states based on prompt learning is proposed to combine OVD and MOT for open-vocabulary multi-object tracking.\n\nSome self-supervised losses are designed to learning better object associations.\n\nExperiments are conducted on public dataset."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work presents a novel open-vocabulary multi-object tracking method that integrates object states based on prompt learning. Different from existing works, it combine OVD and MOT in a unified framework. Some self-supervised losses are designed to learning better object associations. Experiments on public dataset demonstrate the effectiveness of the proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The following details are unclear. Are the designed prompts only used in training procedure? It would be better if visualized results are provided to validate the effectiveness of these prompts in handling challenging frames with occlusions or motion blur.\n\nThere is no comparison with works published in 2024, and the effectiveness of the proposed method is thus not fully validated. The relevent and recent trackers including both closed-set and open-vocabulary ones should be included in comparison.\n\nThe results suggest that the proposed method performs abnormal under ClsA metric in both result comparison and ablation study. Though it is better than all methods in most metrics, but it performs worse than OVTrack and OVTrack+RegionCLIP in a clear margin, and also worse than other methods in some cases. The reasons behind these results are not well explained.\n\nSome errors:\n\nLine 256: into ore framework model\n\nLine 456: Our method"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. The primary assertion of the paper is that existing OV tracking methodologies do not take into account the states of objects. However, the proposed approaches appear to be largely unrelated to tracking object states, and the introduced prompt, .e.g., occuded, and complete, seems overly naive in my view.\n2. The proposed classification method only focuses on the high-quality object. However, exclusively training the classifier on high-quality targets may result in the neglect of low-quality targets that are blurred or occluded. This seems inconsistent with the paper’s claim of addressing issues related to blurring and occlusion in object tracking. Furthermore, could this be the reason for the model’s lower classification performance on the base?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Competitive OV tracking performance on TAO.\n2. Self-supervised object similarity learning is compatible with unlabeled video data."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work proposes a series of improvements to the existing OV MOT framework (OVTrack), including a novel prompt-guided attention mechanism and a self-supervised object similarity learning method. With the support of additional video data, these enhancements achieve superior performance on the TAO dataset compared to OVTrack."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Novelty is limited: The overall architecture of the proposed OVOTrack is based on OVTrack. The specific implementation within the tracking-state aware prompt approach does not exhibit a clear association with the tracking state. Moreover, this method does not assess object quality specifically for tracking scenarios, as it remains overly limited to factors such as occlusion and blurriness. On the other hand, the self-supervised object similarity learning proposed in this work closely resembles QDTrack, with no significant differences observed.\n2. Writing requires further improvement: The overall writing is somewhat scattered and lacks coherence, with inconsistencies between the motivation of the paper and the proposed approach, raising concerns of possible over-claim. There is considerable redundancy in language, with some sections being overly simplistic while others are unnecessarily verbose. Additionally, certain assumptions and notations in the methodology are not rigorously standardized (e.g., line 218). Therefore, I believe there is substantial room for improvement in the writing. \n3. Experimental setting: The training process described in the paper involves four stages, which makes it overly complex and cumbersome. Compared to OVTrack, this work utilizes a substantial amount of additional TAO video data for training; however, the classification performance on the base categories in the TAO benchmark has declined. Moreover, the ablation study results indicate that the OVOTrack model consistently underperforms in classification metrics, yet no reasonable explanation is provided for this issue. The examples provided in the visualization results are overly simplistic, failing to showcase the model’s true capabilities."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. VOVTrack lacks of a clear definition, what's the full name?\n2. For Table 1, the propsed VOVtrack performs worse than OVTrack in terms of ClsA metric. What is the reason behind it?\n3. For Table 2, it demonstrate the effectiveness of Prompt-guided attention. But the baseline with the self-supervised association module fail to obtain obvious improvements, so that the contribution can not be effectively convinced."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. This paper propose a new tracking-related prompt-guided attention for the localization and classification in OVMOT.\n2. The self-supervised learning strategy leverages unlabeled video data for the temporal association, addresses the challenge of training data shortage.\n3. Extensive experimental results, demonstrating that VOVTracker outperforms existing methods on TAO benchmark."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposed a new tracking-related prompt-guided attention for the localization and classification, and develops a self-supervised object similarity learning strategy for the temporal association in OVMOT. Experiments demonstrate that the proposed method achieves state-of-the-art tracking performance compared to previous methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Lack of novelty. The self-supervision is common and there are some considerable work in vision tasks. However, this paper just applies it to OVMOT, and does not compare the proposed method with previous methods. What's the main difference and contributions for OVMOT.\n2. While VOVtrack leverages unlabeled video data, its performance seems to depend on the quality and representativeness of the videos, which could affect its robustness across different real-world conditions.\n2. The authors should conduct on the ablation sduty of different parameter settings. Such as parameters related to training and inference."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024vovtrack,\ntitle={{VOVT}rack: Exploring the Potentiality in Videos for Open-Vocabulary Object Tracking},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3vxfFFP3q5},\nnote={under review}\n}"
},
"abstract": {
"value": "Open-vocabulary multi-object tracking (OVMOT) represents a critical new challenge involving the detection and tracking of diverse object categories in videos, encompassing both seen categories (base classes) and unseen categories (novel classes). This issue amalgamates the complexities of open-vocabulary object detection (OVD) and multi-object tracking (MOT). Existing approaches to OVMOT often merge OVD and MOT methodologies as separate modules, predominantly focusing on the problem through an image-centric lens. In this paper, we propose OVTracker, a novel method that integrates object states relevant to MOT and video-centric training to address this challenge from a video object tracking standpoint. First, we consider the tracking-related state of the objects during tracking and propose a new prompt-guided attention mechanism for more accurate localization and classification (detection) of the time-varying objects. Subsequently,\nwe leverage raw video data without annotations by formulating a self-supervised object similarity learning technique to facilitate temporal object association (tracking). Experimental results underscore that OVTracker outperforms existing methods, establishing itself as a state-of-the-art solution for open-vocabulary tracking tasks."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Object Tracking",
"Open-Vocabulary"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/b3790ab36130cf84b397cc30608e6e3e3e271afa.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/093f817c88c4c3d3606970dd055647c0d620bfd0.zip"
},
"title": {
"value": "VOVTrack: Exploring the Potentiality in Videos for Open-Vocabulary Object Tracking"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3wEGdrV5Cb | Enhancing Federated Domain Adaptation with Multi-Domain Prototype-Based Federated Fine-Tuning | main | Active | Federated Learning; Federated Domain Adaptation; Federated Fine-Tuning | other topics in machine learning (i.e., none of the above) | 5;6;6 | 4;4;3 | 3;3;3 | 1;3;2 | 3;3;3 | 5.666667 | 3.666667 | 3 | 2 | 3 | -0.5 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. The experimental settings are somewhat vague, particularly regarding whether competitive approaches, such as FedAvg and FedProx, are trained from scratch or from the same pretrained backbone model as the MPFT. Could the authors specify this information in the paper, ideally in the experimental setup or implementation details section?\n\n2. The privacy protection measures appear informal. While Gaussian noise is added to prototypes (i.e., embeddings) as noted in Section 5.6, it is important to assess whether the noise level provides adequate privacy protection for sensitive attributes or labels. To strengthen this section, could the authors provide a more rigorous analysis of the privacy guarantees associated with their approach? For example, quantifying the level of privacy protection using established metrics or frameworks would be beneficial."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Experiments on two datasets indicate a significant performance boost from the proposal, increasing accuracy by 2%.\n\n2. The approach is intuitive, sharing prototypes using three sampling strategies.\n\n3. The proposal is communication-computationally efficient, as it involves a low number of interaction rounds and requires only prototypes for communication."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work studies domain adaptation in federated learning scenarios, employing prototype-based fine-tuning to leverage knowledge from other clients. The fine-tuning procedure requires only one round of communication between clients and the server, making it communication-computationally efficient."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The motivating scenario may be impractical, as the proposal assumes that clients already possess a well-pretrained model. This assumption can be particularly challenging in sensitive areas where federated learning is essential, such as healthcare or finance, where data privacy concerns limit access to robust pre-trained models. \n\nCan the authors discuss specific applications or scenarios within these sensitive domains where obtaining a pre-trained model might be more feasible? Additionally, it would be valuable to explore potential solutions or adaptations of their method for situations where a pre-trained model is not available."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I suggest that the authors remove the overly powerful pre-trained feature extractor and instead use a simpler, more foundational model for feature extraction, such as ResNet or ConvNet. This would more accurately showcase the contribution of the prototype and methodology itself.\n\nThe authors should provide a validity proof to demonstrate that training the adapter on aggregated prototypes achieves comparable performance to training on aggregated raw client data or features—or, alternatively, establish an upper bound on the performance gap between these approaches.\n\nAlthough differential privacy (DP) experiments were conducted, a theoretical analysis is also necessary. Specifically, the authors should clarify the noise variance conditions required for their method to satisfy DP. In particular, the statement “Furthermore, we observe that specific noise configurations can reduce bias across heterogeneous datasets, enhancing the robustness of prototype data” requires a more robust theoretical explanation.\n\nI would encourage the authors to further explain, both theoretically and experimentally, the differences, advantages, and potential complementarities among various prototype selection mechanisms."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The main strength of the paper lies in its comprehensive experimentation, which provides a detailed view of MPFT’s performance across various scenarios, showing superior computational and communication efficiency."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes Multi-domain Prototype-based Federated Fine-Tuning (MPFT), a framework designed for Federated Domain Adaptation (FDA) by utilizing domain-specific prototypes. Instead of relying on traditional model aggregation techniques, which often falter due to data heterogeneity, MPFT transfers prototypes (compressed domain representations) to the server. This approach allows the server to learn a global adapter that improves both in-domain and out-of-domain performance. MPFT also incorporates differential privacy to protect prototypes from potential data leakage."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The Multi-domain Prototype-based Federated Fine-Tuning (MPFT) method proposed in this paper is overly simplistic, primarily relying on basic prototype sampling strategies—such as mean, cluster, and random sampling—to generate client prototypes. Additionally, because the model does not undergo training on each client’s data, it fundamentally contradicts the original design principles of federated learning (FL). The performance gains observed in the experiments likely stem from the strong pre-trained feature extractor, which does not require training on all client data but only a few representative prototypes to achieve reasonable performance. If the pre-trained feature extractor were removed and the model was trained solely on prototypes, the results would likely be poor. If only prototype fine-tuning is needed, then why even employ a federated learning framework? Thus, despite achieving certain results, this approach introduces no complex mechanisms or innovative architectures to enhance the FL system, lacking fresh or original design. It merely follows a standard process: feature extraction with a general pre-trained model, prototype selection, server training, and client fine-tuning, without presenting any real novelty.\n\nThe paper does not provide a thorough theoretical analysis of the different prototype sampling methods, nor does it uncover the fundamental differences between these sampling strategies in handling heterogeneous data distributions. Although the paper presents experimental results comparing mean, cluster, and random sampling methods, it lacks detailed explanations on the core differences among these strategies in terms of learning mechanisms, communication efficiency, and personalization effects. Consequently, the paper falls short in theoretical depth, failing to provide any deep insights into the impacts of these sampling strategies.\n\nThe theoretical analysis in this paper primarily focuses on convergence. However, training the adapter on aggregated prototype data is no different from centralized data training, so convergence is naturally expected. To provide valuable theoretical insights, the paper would need to show that the adapter’s performance on aggregated prototypes is comparable to that on aggregated raw client data or offer theoretical performance bounds. Furthermore, transmitting prototypes instead of models introduces a higher risk of privacy leakage, so the analysis should prioritize differential privacy (DP) guarantees rather than focusing solely on convergence. The current paper fails to address theoretical guarantees for privacy protection, which is particularly critical in privacy-sensitive federated learning contexts.\n\nAlthough the experimental section is comprehensive, covering multiple datasets, various sampling strategies, and different DP configurations, it does not clearly reveal the functional differences and applicability of each prototype sampling approach across different scenarios. While the experiments test different prototype configurations, they lack an in-depth discussion on how these configurations perform under different data distribution patterns and sample sparsity conditions. This results in a set of experiments that, while extensive, fails to provide a robust summary of the unique characteristics of the prototype methods, thus missing an opportunity to elevate the research value."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Would the authors explain how you chose the parameter $s$ in Section 5.6? \n\n2. Why is the class $k$ missing in the proof in Appendix B (while formulated before proof)?\n\n3. Can authors make more comparisons with recent (>=2023) works? For example,\nFedcp: Separating feature information for personalized federated learning via conditional policy. In KDD 2023.\nFedFed: Feature Distillation against Data Heterogeneity in Federated Learning. In NeurIPS 2023.\nFedgh: Heterogeneous federated learning with generalized global header. In MM 2023.\n\nIt would be better to compare aggregation-based improvements in recent works and the new mechanism proposed in this paper. Since the authors stressed the drawbacks of these works, it would be better to give detailed support or explanation."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "+ They propose an interesting idea -- MPFT as a one-round federated fine-tuning approach for multi-domain environments, demonstrating its performance improved over previous methods. \n+ A new metric is introduced to evaluate model adaptability, assessing both out-of-domain and in-domain accuracy to balance knowledge retention and domain adaptation.\n+ The writing and logic are clear and easy to follow. The problem formulation is clear and friendly to readers."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces Multi-domain Prototype-based Federated Fine-Tuning (MPFT), a framework designed to address data heterogeneity and data privacy in federated learning (FL) by using prototype-based training rather than traditional averaging methods. In MPFT, each client generates a set of representative data embeddings (prototypes) that capture essential domain-specific characteristics without transferring raw data. These prototypes are then aggregated at the server, allowing for a simulated centralized learning approach and enabling fine-tuning of a global adapter. This method aims to achieve performance on par with centralized learning while solving the challenges incurred by aggregating client models.\nThe efficiency of MPFT lies in that it requires only a single round of global communication, significantly reducing computational and communication costs compared to multi-round FL methods. Furthermore, by selectively sampling prototypes, the framework limits data transfer volumes. To ensure privacy, MPFT integrates differential privacy mechanisms, mitigating risks of data exposure and rendering prototype-based data reconstruction ineffective—even when the prototype encoder is known to potential attackers."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The proof of convergence in Section B lacks the differential-privacy-related analysis. A supplemental analysis of \"MPFT with DP\" should be added for theoretical completeness. Intuitively, the addition of differential privacy introduces a bounded randomness during the convergence process.\n\n- The selected experimental counterparts for comparison are mostly in 2017-2022, which are relatively not new. In Section 2, the authors think the \"averaging-based aggregation results in poor out-of-domain adaptation performance.\" It would be better to show the performance of out-of-domain adaptation from the recent advances atop model/parameter aggregation. Some recent works (with or without aggregation) relevant to data heterogeneity in federated learning are expected to compare, such as,\n\nFedcp: Separating feature information for personalized federated learning via conditional policy. In KDD 2023.\nFedFed: Feature Distillation against Data Heterogeneity in Federated Learning. In NeurIPS 2023.\nFedgh: Heterogeneous federated learning with generalized global header. In MM 2023."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024enhancing,\ntitle={Enhancing Federated Domain Adaptation with Multi-Domain Prototype-Based Federated Fine-Tuning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3wEGdrV5Cb},\nnote={under review}\n}"
},
"abstract": {
"value": "Federated Domain Adaptation (FDA) is a Federated Learning (FL) scenario where models are trained across multiple clients with unique data domains but a shared category space, without transmitting private data. The primary challenge in FDA is data heterogeneity, which causes significant divergences in gradient updates when using conventional averaging-based aggregation methods, reducing the efficacy of the global model. This further undermines both in-domain and out-of-domain performance (within the same federated system but outside the local client), which is critical in certain business applications. To address this, we propose a novel framework called \\textbf{M}ulti-domain \\textbf{P}rototype-based \\textbf{F}ederated Fine-\\textbf{T}uning (MPFT). MPFT fine-tunes a pre-trained model using multi-domain prototypes, i.e., several pretrained representations enriched with domain-specific information from category-specific local data. This enables supervised learning on the server to create a globally optimized adapter that is subsequently distributed to local clients, without the intrusion of data privacy. Empirical results show that MPFT significantly improves both in-domain and out-of-domain accuracy over conventional methods, enhancing knowledge preservation and adaptation in FDA. Notably, MPFT achieves convergence within a single communication round, greatly reducing computation and communication costs. To ensure privacy, MPFT applies differential privacy to protect the prototypes. Additionally, we develop a prototype-based feature space hijacking attack to evaluate robustness, confirming that raw data samples remain unrecoverable even after extensive training epochs. The complete implementation of MPFT is available at \\url{https://anonymous.4open.science/r/DomainFL/}."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Federated Learning; Federated Domain Adaptation; Federated Fine-Tuning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/369006c3cf03bed575c9a067c733ba3efe1b3ae6.pdf"
},
"presentation": null,
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Enhancing Federated Domain Adaptation with Multi-Domain Prototype-Based Federated Fine-Tuning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3wrMRYuLlQ | On the Language of Thoughts in Large Language Models | main | Active | Language models;system 2 reasoning;language of thoughts | foundation or frontier models, including LLMs | 1;5;5;5 | 4;3;4;4 | 1;2;2;2 | 1;3;3;1 | 1;3;3;2 | 4 | 3.75 | 1.75 | 2 | 2.25 | -0.333333 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- the last part of Definition 3.5 (\"Otherwise, ...\") seems to be repeated.\n\n- the abstract on OpenReview does not match the PDF abstract; could the authors clarify?\n\n- could the authors explain how worst-group accuracy is computed and how this leads to the conclusion that \"current language models struggle to properly utilize given premises for reasoning\"?\n\n- could the authors provide more detail on the different prompting strategies used in Section 4.1?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- the paper is well-written and includes extensive results across a diverse range of datasets and LLMs.\n\n- the analysis of language modelling bias and the language-thought gap offers an interesting perspective within the LLM community."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper analyzes language modelling bias and the language-thought gap in the context of LLMs. It proposes a new prompting technique, LoT, to address these issues and evaluates its effectiveness across two bias benchmarks and eight general reasoning benchmarks, using six different LLMs."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- the construction of LoT does not appear fully derived from the analysis of language modelling bias and the language-thought gap (which contribute to a big part of this work). In other words, it seems that LoT is not a natural product/derivation from the bias analysis, e.g., how does LoT overcome the issue that \"one piece of information can have different expressions in language\"? LoT’s design choices, such as \"expanding thoughts,\" seem empirically beneficial but not systematically motivated by the initial analysis. Additionally, the claim in Section 5.2 that \"the expansion prompt may exacerbate language modelling biases\" potentially undermines the rationale behind this feature.\n\n- the performance of the \"echo only\" prompt, which frequently outperforms LoT as shown in Table 3, highlights the need for a deeper understanding of LoT's effectiveness (relating to the above concern). Although the work presents comprehensive testing of LoT across datasets and LLMs, readers would benefit from insights into the underlying mechanisms that make this prompting design work—or fail.\n\n- in Figure 5, the range of benchmarks and LLMs is commendable, but it is unclear why direct prompting is included in the comparison rather than other CoT-like techniques, which might provide a more meaningful comparison to LoT. Given that this study doesn’t focus on the benefit of having internal steps before LLMs generate outputs, the inclusion of direct prompting lacks relevance.\n\n- in cases where LoT performs worse than ablations or other baselines on general reasoning benchmarks, the authors provide conjectures to explain this. However, these conjectures are not substantiated by evidence (e.g., lines 428-431, 517-519).\n\n- the reference to the previously established CoT paradigm used in the experiments is missing.\n- the three types of bias used in the evaluation should be explicitly introduced in the main text rather than solely in the appendix.\n- providing practical examples of two-premise QA would help illustrate the generalizability of the training corpus used here and ground the analysis of the language-thought gap later in the main text.\n- in Figure 2, the arrows are stated to represent causal relations, yet it appears the arrows in the blue box denote topological order.\n- while formal propositions are generally beneficial for clarity, propositions 3.3 and 3.6 seem redundant as their informal explanations suffice, and they are not referenced further in the main text.\n- in Section 3.2, it is unclear how prompting LLMs to \"notice more details\" addresses the problem of \"ignorance of implicit premises.\" This approach may not necessarily lead to locating additional implicit premises, raising questions about the effectiveness of the LoT design.\n- the font size in Figure 5 is very small, impacting readability."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- While somewhat in line with the paper's general theme, I think it should be noted in line 44 that the large language models using CoT merely strive to simulate system 2 way of thinking via the continuous application of their inherently system 1 capabilities and their method is not a true system 2 reasoning process.\n- I think a better explanation of equation 2 would be good to have. If I am not mistaken, this highlights the fact that if topological ordering of reasoning is not preserved by the language, the LLM will only learn a causal relationship between $L_1$ and $L_A$, relegating $C_2$ to a distributional shortcut. It would be adequate to briefly explain the corresponding behavior in the text as well.\n- In comparison to CoT, what do you think are the main differences of LoT that lead to its superior performance? \n- Regarding the extraction of implicit information as well as the exploration of various reasoning paths, the following papers may be of interest: \n\t- [ 1 ] aim to get better results via the exploration of contrastive reasoning paths in CoT. \n\t- [ 2 ] discuss various prompting methods including the distillation of explicit knowledge from implicit context or the utilization of implicit knowledge.\n\t- [ 3 ] aim to ameliorate affirmative bias of CoT via the exploration of counter-paths for each LLM response. Their overall bias and GPT-4 results are also consistent with those of this paper.\n- Consider writing the formula for the \"Bias Score\" rather than its natural language explanation for better readability.\n- Line 295, \"that relevant\" -> \"that is relevant\"\n- Line 359, \"dive the data\" -> \"divide the data\"?\n- Line 468, \" since the original evaluation results consider correct formats in the incorrect formats to be incorrect answers.\" -> \"Consider correct answers in the incorrect formats\"?\n- There is a discrepancy between the prompt name in the submitted version and the TLDR version, I believe CaT should be changed to LoT or vice versa.\n\n[ 1 ] Chia, Yew Ken, et al. “Contrastive Chain-of-Thought Prompting.” _arXiv.Org_, 15 Nov. 2023\n\n[ 2 ] Yu, Fei, et al. “Natural Language Reasoning, A Survey.” _arXiv.Org_, 26 Mar. 2023\n\n[ 3 ] Miandoab, Kaveh Eskandari, and Vasanth Sarathy. “‘Let’s Argue Both Sides’: Argument Generation Can Force Small Models to Utilize Previously Inaccessible Reasoning Capabilities.” _arXiv.Org_, 16 Oct. 2024"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The general concern of the paper is both valid and very interesting to explore in the literature. It is indeed true that given the autoregressive nature of Large Language Models, one can expect their inherent modeling biases to \"leak\" into the reasoning process, contaminating the possible results.\n- The paper is well-written and well-presented, with good formulation of assumptions as well as examples for the easier understanding of the readers.\n- The proposed method is relatively simple and effective, making its use-case accessible to users and across different models."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper establishes the inherent gap between the language of communication, and the language of thought, showcasing that functionally equivalent language representation of a thought process might lead to biased behavior in Large Language Models during both training and inference due to their autoregressive objective definition.\nTo mitigate this shortcoming, the paper introduces LoT, a prompting method that aims to force the Language Model to echo and expand the facts contained in the input, converting the potentially unusable implicit knowledge, to usable explicit context. \nEvaluation of the above hypothesis shows that LLMs, when prompted via LoT, showcase superior behavior with respect to model fairness and bias, while gaining an overall boost in reasoning performance. \n\nOverall, I find the proposed prompting method interesting and potentially effective under reasoning and fairness intensive tasks. However, a number of concerns such as the applicability of method and its comprehensive analysis still remain as outlined below."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Despite the success of CoT-like prompting methods in the simulation of system 2 processes, I think we have to be careful when trying to achieve system 2 way of thinking via prompting, as LLMs, due to their training objective, will always carry their linguistic capabilities and biases to all prompting methods. Further architectural changes may even be necessary to reach true system 2 capabilities. \n- Regarding the inference time linguistic bias, I disagree that implicit representations of the context completely forbid the LLM from using them in its reasoning as the implicit representation is certain to encode parts of the explicit representation in itself as well. However, it is true that it can make the reasoning more difficult. I suggest changing the language in this section to reflect that.\n- Experimental results showcasing that the training, and inference bias, as mentioned in the paper, are the reasons behind the underperforming of the models in system 2 related tasks are sparse throughout the paper. Further evaluations and theoretical considerations should be made to further bolster the proposed conjecture.\n- The ablation study shows that the effect of each component can be inconsistent across models and tasks, therefore I suggest a deeper analysis of each component, possibly via the manual investigation of the model response change based on the inclusion of a component.\n- Following from the previous point, I still find it somewhat unclear how the proposed prompting method ameliorates the model bias, I suggest further evaluations to show the change in behavior.\n- I think that the relationship between the system 2 thinking of LLMs and the proposed method can be better explored via the construction of direct links between each other and showcasing how the prompting method addresses a problem in the large language model."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* LoT requires more inference time compute than plain CoT, given that it has to echo+expand. Have you considered doing a cost analysis to show how much gains one can have, vs how many more tokens are needed?\n* For domains where CoT is already good (e.g. math), is LoT still better or equal than using CoT? That would further show the potential of the technique. More in general, what are the potential drawbacks of using LoT?\n* Consider giving the phenomenon a more unique name than “bias”, as it is a very loaded term. In the first part of the paper it appears that we will focus on evaluating societal biases, and then the experimentation broadens and shows it is a general learning bias.\n* Consider presenting the results better in Figure 5, as it is hard to grasp the overall performance loss reduction vs general gains with LoT. It would be important to grasp when the technique can offer large improvement gains, on top of the bias reduction."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* Simple prompting technique that can reduce language modeling bias (both societal and general learning biases) by echoing and expanding the required concepts before answering a complex question. I find particularly interesting that the technique shows that recent reports of performance degradation when using CoT may not be intrinsic to the step-by-step technique, but rather to the concrete execution shown in CoT.\n* The authors provide a useful intuition on why some biases may arise (Section 3), which is shared by many researchers but is important to spell it out like this work does. This intuition is useful for other applications besides the general motivation for a more structured CoT."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a prompting technique (Language of Thought, LoT) that provides a better structured step-by-step reasoning than the conventional Chain of Thought by asking the model to echo and expand the relevant information before answering. LoT alleviates the biases that are sometimes introduced by language modeling (societal or just general learning bias). LoT can also alleviate the regressions that CoT sometimes introduce in non-math domains. The paper also introduces some theory on language of thought bias, that is later used as intuition to support the design of LoT."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The theory presented is very useful to have in mind when designing prompts or generating synthetic data in general, but it is not really used anywhere in the paper except as general motivation, and it is not as formal as one would hope (e.g. propositions do not have a proof).\n* Consider rewriting the intro with more nuance, especially when describing psychological phenomena. E.g. System 1 and 2 should have more nuance, as these are useful theories, but they are not necessarily universally accepted in psychology. This psych discussion does not matter for CS research, as they are just useful analogies to our research, but the literature should be discussed well."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "This reviewer would have quite a few questions once the basic terms of the analysis are clarified.\n\nA basic one would be: why the insistence on causal relations between representations instead of deductive or inductively supported ones?\nPlease see my discussion above."
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "The authors are correct that permutations of sentences or premises for reasoning can lead to untoward results in LLM performance. The ideas behind the LoT prompting style have some promise."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper attempts to distinguish between language modeling and thought modeling. While LLMs right now model linguist to imitate human reasoning, the authors claim that there is a gap between language and though, which can introduce certain biases. They propose a new type of prompting that they call Language of Thought Prompting and provide various experiments comparing Language of Thought with CoT type prompts."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While the ideas behind the new prompting style might be promising, this paper needs a radical overhaul to be acceptable. Too much in this paper is just hard to understand or vague. The leading question for the paper for example is already muddled: what does it mean to elicit the language of thought as humans??? Here is the quote: \n\"Do LLMs with CoT model and elicit the language of thoughts as humans?\"\nThe whole set involving a comparison up between language and thought in humans is somehow besides the point that the authors want to bring up. The motivation for the LoT style prompting seems to rely on is that different strings of linguistic tokens may express the same or at least very similar linguistic content to humans. This is not a language vs. thought issue but rather an issue of whether LLM objective functions as they stand, or even with CoT, capture real linguistic, semantic content. There is a lot of literature out there on insufficiencies of LLM in capturing semantic content; e.g. there's Bender, Emily M., and Alexander Koller. \"Climbing towards NLU: On meaning, form, and understanding in the age of data.\" Proceedings of the 58th annual meeting of the association for computational linguistics. 2020. But perhaps more relevant to the authors are two recent papers that look in detail at how LLMs fail to capture semantic meaning:\n \"Strong hallucinations from negation and how to fix them\" arXiv preprint arXiv:2402.10543 (2024);\n``Analyzing Semantic Faithfulness of Language Models via Input Intervention on Question Answering'' (Computational Linguistics 2024). \nSpeaking in terms of semantic contents instead of thoughts gives the authors a lot more ammunition to investigate where CoT approaches break down. 
Simply put, we know a lot more about the structure and features of semantic contents than we do about thoughts.\nIf the authors can show in detail that LoT methods capture semantic content better than CoT methods, that could be an important finding.\n\nThere's also a confusion between between logical and causal sequences. I quote:\n``Human conducts System 2 reasoning via the language of thoughts that organizes intermediate steps\nas a causal consequence of mental representations (Rescorla, 2024). For example, a human baby is\nable to abstract, construct, and reason over a causal map of the world in their minds.\" The important point it would seem to this reviewer in system 2 reasoning is that the intermediate steps follow each other in terms of logical or semantic consequence, not causal consequence. Or rather in a good system 2 reasoning system causal consequence and logical consequence merge. Why do I say that? Because you can have causal sequences of thoughts/representations in a psychopath that are completely crazy have no logical relation to each other and have nothing to do with type 2 reasoning. \n\n\nThe authors move quickly from evidence about thoughts without language to the thesis that \"As language is primarily a tool for communication instead of thinking and reasoning\". But there is nothing in the paper that warrants this assertion. And unfortunately,\nthis assertion is key to the paper and drives the move to find different prompts from standard CoT prompts.\n\nThe paper is woefully short on examples and often it's very difficult to understand what the authors want to say:\n\nFor example: \"Thoughts are the unobserved high-level random variables evaluated by\nbrains that drive us to generate language.\"\nIt's really hard to figure out what to do with this. 
And in addition it's a definition.\n\n\nanother example, 'When a premise is expressed in an implicit expression under a context, it is\nhard to notice and utilize it for downstream reasoning' what does it mean to have an implicit expression?\n\nHere's a sentence that just seems to be flat out false: \"For humans, since the language order does not determine the language\nmeaning when given proper conjunction words, one can easily change the order of presenting the\npremises in need.\" \nAs a counterexample, consider: \nJohn took off his shoes and went to bed\nvs. \nJohn went to bed and took off his shoes\n\nThe meaning conveyed by these two sentences is quite different. Changing the order of sentences often changes the meaning of a text.\nThis is part of the study of discourse structure and how it's formally interpreted. E.g. N. Asher & A. Lascarides, Logics of Conversation, \nCambridge University Press, 2003. And unfortunately this assumption seems to be key to distinguishing LoT from CoT\n\n\nA lot of the sentences in this paper aren't English or well formed in any language I know of . Eg. \"The Interplay between language and thoughts has intrigued a long historical discussion about\nthe role of language in human thinking in the literature (Rescorla, 2024; Fedorenko et al., 2024).\"\nIssues don't intrigue a historical discussion. In addition, if the history is long, why cite people from 2024? The authors might \nstart by citing Fodor Language of Thought (1975) but actually the issue already arises with early medieval thinkers like Saint Augustine and his concept of the \"verbum mentis\". 
See the Stanford Encyclopedia article on medieval semiotics.\n\n\nAgain: \"Consequently, modeling thoughts merely from the language\ncan easily integrate the language modeling biases into the learned model, such as the order (Wei et al.\"\nthis is largely unintelligible, and at this point language modeling biases haven't been defined.\n\n\nThe expanding thought part of the proposed prompt is too vague to be at all useful as it currently stands.\n\"instruct the model the expand those\" is not English or comprehensible. Please rephrase"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We demonstreate the gap of LLMs in modeling human thoughts for system 2 reasoning and propose Call-of-Thoughts to alleviate the gap."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024on,\ntitle={On the Language of Thoughts in Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3wrMRYuLlQ},\nnote={under review}\n}"
},
"abstract": {
"value": "System 2 reasoning is one of the defining characteristics of intelligence, which requires slow and logical thinking. Human conducts System 2 reasoning via the language of thoughts that organizes the reasoning process as *a causal sequence of mental language*, or thoughts. Recently, it has been observed that System 2 reasoning can be elicited from Large Language Models (LLMs) pre-trained on large-scale natural languages. However, in this work, we show that there is a significant gap between the modeling of languages and thoughts. As language is primarily a tool for humans to share knowledge and thinking, *modeling human language can easily integrate into language biases* that are not related to thoughts. Furthermore, we show that the biases may mislead the eliciting of “thoughts” in LLMs to focus only on a given part of the premise. To this end, we propose a new prompt technique termed **Ca**ll-of-**T**houghts ( CaT ) to alleviate the issue. Instead of directly eliciting the chain of thoughts from the potentially biased information, CaT instructs LLMs to focus and expand based on all the relevant information. We show that the simple strategy significantly reduces the language modeling biases in LLMs and improves the performance of LLMs across a variety of reasoning tasks."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Language models",
"system 2 reasoning",
"language of thoughts"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/1c2ed0377711d4baf2c5584ce59073e4f03dfb2f.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "On the Language of Thoughts in Large Language Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3x4vpeAclU | Enhancement of In-Context Reasoning in LLMs through Inductive Rule Learning | main | Desk Reject | In-Context Learning;Inductive Reasoning | generative models | Tien-Dat Nguyen;Hai-Toan Nguyen;Nguyen Viet Ha | ~Tien-Dat_Nguyen1;~Hai-Toan_Nguyen1;~Nguyen_Viet_Ha1 | 0 | 0 | 0 | 0 | 0 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": {
"value": "The submitted PDF is a placeholder and not a valid submission."
},
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": {
"value": "Submission Desk Rejected by Program Chairs"
},
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@misc{\nnguyen2024enhancement,\ntitle={Enhancement of In-Context Reasoning in {LLM}s through Inductive Rule Learning},\nauthor={Tien-Dat Nguyen and Hai-Toan Nguyen and Nguyen Viet Ha},\nyear={2024},\nurl={https://openreview.net/forum?id=3x4vpeAclU}\n}"
},
"abstract": {
"value": "Currently, Large language models (LLMs) have achieved remarkable performance across various language tasks, largely due to their training on extensive datasets and their considerable model size. These models exhibit in-context learning abilities, which is to learn through few-shot learning. However, the underlying reasoning process remains ambiguous, it is unclear whether the model simply retrieves relevant information and instructions from its training data to generate similar responses, or whether it generalizes examples to form overarching rules, which are then applied to produce accurate answers. Another method for improving few-shot learning is Chain-of-Thought prompting that complement steps by steps instruction for LLMs, so they can follow this instruction to solve many reasoning tasks. Several approaches for evaluating the reasoning abilities of LLMs typically involve task-solving through code generation, which enables models to formalize problems and leverage a code compiler to solve them precisely. However, these methods are constrained to specific task types and are insufficient for a comprehensive assessment of the model's broader reasoning capabilities. Therefore, this paper proposes a method to enhance in-context learning capabilities through two main stages: generating general rules from the provided examples and utilizing LLMs to verify these general rules, thereby aiming to improve reliability and accuracy. At the same time, this approach seeks to investigate the inductive and deductive reasoning abilities, and can improve our understanding of the model’s reasoning by generating and applying general rules to provide transparent, clearly explained responses. The proposed method demonstrates competitive performance on the 1D-ARC benchmark and several traditional language tasks, suggesting its potential for more robust evaluation of LLM reasoning abilities."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": {
"value": [
"~Tien-Dat_Nguyen1",
"~Hai-Toan_Nguyen1",
"~Nguyen_Viet_Ha1"
]
},
"authors": {
"value": [
"Tien-Dat Nguyen",
"Hai-Toan Nguyen",
"Nguyen Viet Ha"
]
},
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"In-Context Learning",
"Inductive Reasoning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": {
"value": "nguyen|enhancement_of_incontext_reasoning_in_llms_through_inductive_rule_learning"
},
"pdf": {
"value": "/pdf/6d6d76c526eaac210aa74569ad6fa951ecbd74d2.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Enhancement of In-Context Reasoning in LLMs through Inductive Rule Learning"
},
"venue": {
"value": "ICLR 2025 Conference Desk Rejected Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Desk_Rejected_Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
||||||||||
3xjc9PhEPd | Empirical Guidelines for Deploying LLMs onto Resource-constrained Edge Devices | main | Active | On-device Learning;Edge Computing;Efficient ML;Large Language Models | datasets and benchmarks | 3;5;5;6 | 4;5;3;3 | 1;2;3;4 | 1;1;2;3 | 2;2;3;3 | 4.75 | 3.75 | 2.5 | 1.75 | 2.5 | -0.345857 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1.\tThe remark in Section 3.1 suggests that increasingly complex tasks require stronger models. This is a commonly understood point and lacks novelty.\n2.\tIn Section 3.1, it is suggested that RAG is more suitable for tasks of moderate difficulty. However, according to Figure 2 and Table 1, there is no significant difference between LoRA and RAG in terms of performance.\n3.\tFive fine-tuning methods are compared in Section 3.1, but the first three are weaker than LoRA and RAG, offering limited practical insight.\n4.\tSection 3.2 suggests that smaller values for rank and alpha are more suitable for resource-constrained environments, but this finding also lacks innovation. Additionally, the models discussed are relatively small, making them inherently more compatible with LoRA in limited-resource scenarios, which somewhat disconnects the findings from real-world edge limitations.\n5.\tThe discussion on training duration in Section 3.2 does not specify which type of device is being considered. In the Appendix, the devices listed range from 4GB to 16GB of RAM, which would result in significantly different feasible training times.\n6.\tSection 3.4 proposes using only a limited amount of historical data in RAG, yet given the privatized edge LLM scenario suggested by the authors, it is realistic that users would only have access to a finite amount of data rather than unlimited data.\n7.\tThe performance loss due to model compression is a well-known trade-off rather than a novel finding specific to edge LLMs.\n8.\tAlthough the paper is framed as addressing the deployment of edge LLMs, it mainly focuses on the private fine-tuning of models. Important aspects of deployment, such as inference, are not covered."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "This study is very comprehensive, employing a wide range of models and constructing various scenarios and methods for comparison."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors conducted extensive experiments and benchmarking to provide empirical guidelines for deploying large language models (LLMs) on edge devices, with a focus on fine-tuning models in private settings. However, the final conclusions of the paper align largely with common sense, and in some areas, the study lacks novelty."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Although this paper provides an empirical guideline through extensive experimentation, many of the conclusions are quite intuitive, lacking some innovative findings."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "The paper primarily evaluates popular pre-trained LLMs like Llama and OPT, with various modifications. There is little exploration of alternative architectures that could be inherently better suited for edge deployment."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Adequate experimental evaluation is carried out in this paper.\n2. The topic of deploying LLM at the edge is interesting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper focuses on providing empirical guidance on LLM deployment on resource-constrained edge devices. The research focuses on how to optimize the design choices of LLMs in a resource-limited environment, balancing the computational demands with the capabilities of the devices."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper primarily restates existing strategies for model deployment and optimization, lacking substantial innovation. The guidelines and strategies discussed, such as model compression, parameter-efficient fine-tuning (PEFT), and retrieval-augmented generation (RAG), are already well-documented methods in the machine learning field. The paper offers an empirical evaluation rather than a novel methodological contribution.\n\n2. The experiments are largely confined to synthetic and benchmark datasets, which may not adequately represent the diversity of real-world scenarios where edge LLMs are deployed. This limits the applicability of the guidelines to practical use cases involving more dynamic environments."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please see above."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "This is an important and timely topic. If done correctly, such study can provide practical guidelines for researchers and developer."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper studies LLM personalization at resource-constrained edge devices, investigating the effect of design choices (e.g. what model to use, what personalization technique to apply) on the performance (e.g. accuracy). This is done by running a set of experiments to observe the effect of each choices. According to these observations, a set of guidelines are proposed."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The study does not have the necessary depth. In particular, it reports only single-run experiments. It is however essential for an experimental study of this nature to perform multiple experiments per setting, enabling statistical comparison of different design choices. For instance, by providing means/medias and confidence intervals, one can assess if design choice A achieves a statically significant improvement over design choice B. I recommend the authors to follow approaches such as experimental design [1] to enhance the robustness of the study.\n \nBesides, the paper fails to provide clear guidelines on how select the design choices. For example, the text below is from Section 3.1.:\n\n“As task difficulty increases, such as with complex classification tasks and simple summarization tasks, the choice should gradually shift to RAG with the strongest model. Here, the strongest models are (quantized) LLMs that excel at general benchmarks and fit within the RAM constraint.”\n\nWhat are specific criteria on deciding what is a complex classification task? Does it depend on the number of classes? Or on the task? Is it possible to provide some quantitative measures on what is a complex classification task? Also, what does mean gradually? What I get from this guideline is some general roles, but it does not help me to make a clear decision.\n\nFinally, as I understand, the selected datasets for the fine-tuning process are available online, at Github. So, there is a possibility that the models which are studied in this paper have been already exposed to these datasets during the pre-training. This could change how we interpret the results, as fine-tuning a model over a subset of its training data is usually an easier task than fine-tuning over new (unseen) dataset. Please elaborate more on this aspect.\n\n[1]. R. A. Fisher et al. The Design of Experiments. Number 5th ed. Oliver and Boyd, London and Edinburgh, 1949."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Including some curiosity in weakness, I also raise the following questions.\n\n- Appendix A.1 shows different edge devices. I suggest the authors include and compare the computing resources, not only RAM, of the devices to give readers a better understanding of the range of resources considered in the study\n- In Table 3, the data samples per hour measured by the A10 GPU are illustrated. However, as stated earlier, processors like CPUs, GPUs, and mobile-targeted SoCs show significantly different computation times. Is the focus on computing resources solely on memory?\n- RAG and LoRA are quite different techniques, with one altering the model structure and the other not. I also think they serve different purposes. Can they be considered as alternatives to one another?\n- (minor) Rather than “performance,” please specify the exact metrics measured\n- Figure 9 shows five models, not 6, as mentioned in the caption.\n- The results suggest that insights vary significantly based on task type or complexity. Is there a way to categorize or quantify task characteristics or difficulty before running the tasks?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "- As LLM deployment differs significantly on edge devices due to limited computing resources and the use of private data, the comprehensive evaluation and criteria for optimization techniques are crucial topics\n- The authors conduct comprehensive and well-defined experiments\n- The authors’ findings are novel, as they can guide future research directions"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses various design and deployment choices for running LLMs on edge devices. I enjoyed reading this paper, as the authors provide insights with clear and empirical experimental results."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- In the introduction, many statements and findings seem to pertain to training (fine-tuning). Clearly stating the focus and deployment scenario would make the paper’s sections clearer (fine-tuning, inference, or both?)\n- Instead of focusing on individual techniques for LLM deployment, what about combining two or three methods? For example, RAG and LoRA could potentially be applied together\n- I understand the page limit and the authors’ efforts to address this issue, but many supporting results are in the Appendix, making it somewhat difficult to follow thoroughly.\n- The authors experiment with specified models on particular devices, considering memory constraints. What about testing similar models on different devices? Training a model for one hour on different devices might yield different insights\n- I believe many aspects, including the combination of design choices, remain unexplored in this study. Clearly specifying which aspects or potential experiment cases are covered and which are not would help readers better understand the study’s scope"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Provide Empirical guidelines for deploying and using LLMs on edge devices"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024empirical,\ntitle={Empirical Guidelines for Deploying {LLM}s onto Resource-constrained Edge Devices},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3xjc9PhEPd},\nnote={under review}\n}"
},
"abstract": {
"value": "The scaling laws have become the de facto guidelines for designing large language models (LLMs), but they were studied under the assumption of unlimited computing resources for both training and inference. As LLMs are increasingly used as personalized intelligent assistants, their customization (i.e., learning through fine-tuning) and deployment onto resource-constrained edge devices will become more and more prevalent. An urgent but open question is how a resource-constrained computing environment would affect the design choices for a personalized LLM. We study this problem empirically in this work. In particular, we consider the tradeoffs among a number of key design factors and their intertwined impacts on learning efficiency and accuracy. The factors include the learning methods for LLM customization, the amount of personalized data used for learning customization, the types and sizes of LLMs, the compression methods of LLMs, the amount of time afforded to learn, and the difficulty levels of the target use cases. Through extensive experimentation and benchmarking, we draw a number of surprisingly insightful guidelines for deploying LLMs onto resource-constrained devices. For example, an optimal choice between parameter learning and RAG may vary depending on the difficulty of the downstream task, the longer fine-tuning time does not necessarily help the model, and a compressed LLM may be a better choice than an uncompressed LLM to learn from limited personalized data."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"On-device Learning",
"Edge Computing",
"Efficient ML",
"Large Language Models"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/711b4df3bdb10abdaa5b085cebd86b48cc92f62d.pdf"
},
"presentation": null,
"primary_area": {
"value": "datasets and benchmarks"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Empirical Guidelines for Deploying LLMs onto Resource-constrained Edge Devices"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3xpTXF5ALZ | AI2TALE: An Innovative Information Theory-based Approach for Learning to Localize Phishing Attacks | main | Active | Phishing Attacks;Email Phishing Attack Localization;Interpretability and Explainable AI;Deep Learning | interpretability and explainable AI | 3;8;8 | 4;4;3 | 2;3;4 | 1;3;3 | 1;3;4 | 6.333333 | 3.666667 | 3 | 2.333333 | 2.666667 | -0.5 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "To the best of my knowledge, I see no ethical concerns regarding this work."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1) In line 271, you mention that $p(\\tilde{x_i}|X)$ is a Gaussian mixture distribution. Why did you choose this distribution? Did you evaluate the effect of this assumption on your results?\n \n 2) In your algorithm (see line 333), you split an email into sentences based on periods or commas. Have you examined whether this sentence-splitting method impacts your results? There are alternative ways to split text, such as by a specified number of characters.\n \n 3) Based on the main paper and appendices, it appears you developed a feed-forward neural network (see lines 879–881) for the selection model in your algorithm (see line 339). My understanding is that this neural network creates word embeddings based on the provided texts and then focuses on detecting phishing attacks. Why did you choose this model instead of a more advanced architecture, such as a transformer?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "Overall, this is a well-written and clearly explained work. The motivation and contributions are effectively communicated, as are the details of the algorithm, including the model architecture and data. According to the results presented in the tables, the AI2TALE model achieves state-of-the-art performance in detecting phishing attacks based on two measures—Label-Accuracy and Cognitive-True-Positive—which the authors believe are more appropriate for this task. The model also achieves state-of-the-art results with the well-known F1 score. The results are also validated by humans who found this model helpful. Another strength of this paper is that the results of this information-theory-based model are explainable, a claim that appears valid based on the presented figures. Finally, all code is reproducible and open-source."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work addresses the problem of phishing attacks and introduces an information theory-based model called AI2TALE to detect them without requiring ground truth while also providing explainable results. The authors validate this model on seven diverse real-world email datasets, demonstrating that the AI2TALE model achieves state-of-the-art results. The model is also tested with human participants, who found it helpful for detecting phishing attacks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I have included a few comments regarding the presentation and also have some feedback on the experimental section.\n \n A) Presentation Comments\n \n A1) In lines 058 to 060, where you mention that early-stage AI approaches are among the most effective solutions for preventing and reducing negative effects, this seems like a strong statement. It would be beneficial to include a citation to support this claim.\n \n A2) In line 152, you introduce the term \"AI2TALE\" for the first time as the name of your model. It would be helpful to clarify what this acronym stands for and briefly explain your rationale behind the name selection.\n \n A3) In line 330, you refer to your method as “Algorithm 1.” Since this is the only algorithm presented in your paper, it is unnecessary to number it as \"1.\" Consider renaming it simply as “Algorithm” throughout the text.\n \n A4) In Section 4.1, “Studied Datasets” (lines 360-371), you list several datasets used in your research. Please add citations and links for each referenced dataset. Additionally, include citations and links in Section 6.2 of the appendices (lines 778-788).\n \n A5) In line 404, you mention that readers seeking more details can refer to the appendices. Please specify the exact section or appendix that contains the relevant information.\n\n A6) In Section 6.2 of the appendices (lines 789-799), this paragraph appears to be more relevant to the following section, 6.3, on data preprocessing and embeddings.\n\n A7) The link of your code, see line 897, should be in the main paper, such as in the introduction. \n\nA8) As mentioned in the author guidelines (see https://iclr.cc/Conferences/2025/AuthorGuide), you are encouraged to include a Reproducibility Statement at the end of the main text. 
This statement should detail the efforts made to ensure reproducibility and include any necessary information for those wishing to replicate your results.\n\n A9) You have referenced a good selection of papers with nice variation; however, I think a few relevant papers are missing. These include \"Feature-based Learning for Diverse and Privacy-Preserving Counterfactual Explanations\" by Vy Vo et al., \"The Anatomy of Deception: Measuring Technical and Human Factors of a Large-Scale Phishing Campaign\" by Anargyros Chrysanthou et al., \"Towards Modeling Uncertainties of Self-Explaining Neural Networks via Conformal Prediction\" by Wei Qian et al., and \"DIB-X: Formulating Explainability Principles for a Self-Explainable Model Through Information Theoretic Learning\" by Changkyu Choi et al. In the related work section of the appendices or the main paper, you could also include additional studies based on information theory that are not necessarily related to phishing attacks.\n \nB) Experimental Section Comments\n \n B1) In Table 1 (line 424), you present the results for each model. Since you used multiple datasets, it would be helpful to show the results for each dataset separately.\n \n B2) In Table 2 (line 443), it would be nice to include the second and third sentence top sentences of the model highlighting them with different colors and briefly discussing them.\n \n B3) In the human evaluation section (line 493), please include any available statistics on the participants, such as their expertise, gender, education level, etc.\n\n B4) In Section 6.7 of the appendices (lines 899–960), Table 3 presents results that, in my opinion, are quite significant and provide further support for the strength of your model. It would be better to incorporate the results from Table 3 into Table 1 and remove the \"average\" section."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "The paper presents results from a human study but do not provide operationalization details. Potential risk for leading questions/biased results."
},
"flag_for_ethics_review": {
"value": [
"Yes, Responsible research practice (e.g., human subjects, data release)"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Please see below some questions and comments to improve the manuscript:\n\n- Overall, the terminology used in this paper, e.g., phishing attack localization and phishing vulnerability are atypical for the cybersecurity community. Page 1, para 2 states that AI has been used for malware vulnerability detection. Malware detection and vulnerable code detection are two entirely different sub-domains. Moreover, page 3 states that post-hoc explanations do not offer a comprehensive understanding of the model's internal architecture or workings. Overexaggerated claim. Model-based XAI techniques like integrated gradients do exactly that! Deep models are also called \"self-explanatory\" methods in the manuscript. Again, popular literature dictates that deep learning is inherently a black box and thus cannot be self-explanatory/interpretable. The authors are advised to revise the manuscript and avoid redefining terms that already exist in the domain.\n\n- This work is not the first or the only work that labels an object as phishing and identifies which features led to the classification decisions. Could the authors present a comparative analysis against existing methods, e.g., [1-2], and justify why they were not mentioned in the manuscript? \n\n- Please add references for the datasets mentioned on page 7.\n\n- Please include the definition of the evaluation metrics (especially cognitive true positive) in the main body text (Section 4.2). Without the definition, it is impossible to understand the results. \n\n- The significance of the results in table 1 are hard to interpret because the authors do not provide any information regarding the dataset, e.g., number of phishing emails, sentences per email etc. Moreover, the improvement in the results is marginal (1-3%). Does the proposed approach provide benefits beyond the results in Table 1?\n\n- Please also report false positives and false negatives for each method in table 1. 
FPs are especially a big problem in cybersecurity tasks.\n\n- Please provide more details regarding the human study. 25 participants don't seem statistically significant for 10 emails, each with 5 options, as estimated by Cochran's formula. Moreover, it is unclear what questions are asked or how they are phrased exactly. For instance, if the participants are always shown the top-1 sentence and not given an option of top-3 sentences, this can very likely be interpreted as a rhetorical question, which leads people to respond in the affirmative most of the time. The authors are advised to increase the sample size to a statistically significant level, provide the exact phrasing of questions asked, and include control questions or alternative options (like top-3 sentences) to reduce bias."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "It is good to focus on explainable approaches so we can understand what ML models do. The method uses information theory approaches, which is indeed an innovative angle."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper builds a system that identifies both, whether an email text is a phishing attack and which top-1 sentence in the email has the highest contribution to the classification label. The method is based on information theory, which is used by the model to distribute importance weights to features. The paper claims that this is one of the first works in phishing localization and that XAI techniques have not been applied to phishing attacks in the literature. By comparing against 7 open source datasets and 5 popular baseline methods, the method shows an improvement of 1.5-3.5% over baselines."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Firstly, the claim that this is one of the only works for interpretable phishing localization might be inflated. There is plenty of work in the domain of phishing attack detection that builds ML models and applies XAI techniques to understand why something was labelled as a phishing attempt, see e.g., [1-2] as just two random examples. These techniques even go beyond text and also tag why images or URLs were considered phishing. Thus, the novelty of this approach is severely lacking. Please provide a more detailed comparison of the proposed approach to existing XAI techniques in phishing detection, highlighting specific differences in methodology or capabilities.\n\nSecond, one of the most popular explanation techniques for text, especially for deep nets is the attention mechanism. The proposed approach seems to somewhat achieve a similar goal by learning one model that both classifies text as phishing and also discovers features (sentences) that led to the classification decision. Yet, the paper makes no mention of this popular technique, nor do they compare with it in terms of fidelity, speed, correctness, etc. Thus, there is insufficient comparison with existing methods. The authors are advised to include a comparison with attention-based methods, evaluating aspects like fidelity, speed, and correctness. \n\nThird, the writing can substantially be improved. There are several typos that hinder comprehension. A non-exhaustive list: page 2, second last para -> which information causes. page 4, first sentence -> given an email. page 5, line 255 -> we encourage. \n\n\n[1] Chai, Yidong, et al. \"An explainable multi-modal hierarchical attention model for developing phishing threat intelligence.\" IEEE Transactions on Dependable and Secure Computing 19.2 (2021): 790-803.\n[2] Lin, Yun, et al. \"Phishpedia: A hybrid deep learning based approach to visually identify phishing webpages.\" 30th USENIX Security Symposium (USENIX Security 21). 2021."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "I don't think any additional ethical review is required, but it could be good if the authors added a more elaborate discussion of the ethical implications of deploying this AI-based system in real-world applications, particularly concerning privacy concerns and the risk of misclassification."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed.",
"Yes, Discrimination / bias / fairness concerns"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- The paper compares the proposed method with state-of-the-art AI-based approaches. Did you consider including a more elaborate discussion of how traditional (non-AI) phishing detection methods, such as URL filtering and white and blacklisting IPs, sandboxing, etc., can be complemented by or replace improvements offered by AI2TALE?\n- Can you elaborate on what defense mechanisms organizations should prioritize to counter the increasingly sophisticated, region-specific threats posed by localizaion?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Strengths:\n- Original Contribution: The proposed method (enhances explainability) is a significant improvement in phishing defense and a good use of AI.\n- The use of information theory and the mutual information training principle to select relevant features is well-founded. The introduction of a selection network that utilizes latent variables to identify important sentences is also good.\n- The authors provide thorough evaluations across seven diverse datasets. The comparative performance metrics (Label-Accuracy and Cognitive-True-Positive) are appropriate and lend credibility to their claims of improved performance over baseline models.\n- The paper is well-organized, with clear sectioning and logical flow. The introduction succinctly frames the problem and the proposed solution, making it accessible even to readers less familiar with the technical intricacies.\n- The use of human evaluations to assess the interpretability of the selected sentences enhances the practical relevance of the findings."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a novel approach to phishing attack localization using deep learning and information theory principles. The method aims to classify emails as phishing or benign and explain these classifications by identifying the most relevant information within the emails. In short, phishing attack localization aims to customize phishing attacks to align with a target region or group's specific cultural, linguistic, and contextual characteristics. The paper examines how localized phishing tactics, such as using native language, regional events, or imitating local institutions, enhance the effectiveness of phishing campaigns by increasing their credibility. The study compares localized versus generic phishing attacks and highlights key factors that improve success rates. It also discusses the growing role of AI in automating localization efforts, presenting a challenge for organizations to develop stronger, region-specific defense mechanisms to counter these advanced attacks. The authors have conducted extensive experiments on multiple datasets to demonstrate the efficacy of their approach. Overall, it’s an interesting paper with a sound contribution."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Weaknesses:\n- It could be good to add a more elaborate discussion of the ethical implications of deploying this AI-based system in real-world applications, particularly concerning privacy concerns and the risk of misclassification.\n- The authors acknowledge potential limitations regarding selecting non-relevant sentences. A more thorough discussion of these limitations and how they might affect real-world applications would be good. For example, the risk of misclassification and potential consequences of user confusion arising from irrelevant explanations.\n- The paper mentions the introduction of hyperparameters in the mutual information maximization and data-distribution mechanisms. It would be good to elaborate on the model’s sensitivity to these hyperparameters and their impact on the performance. It would also be beneficial for the authors to discuss how the introduced hyperparameters are likely to generalize to different contexts and model types. Understanding whether these hyperparameters remain effective across various datasets or when applied to different phishing detection frameworks.\n- The results section could benefit from more visualizations (figures or tables) to make the results more digestible and better illustrate the performance improvements."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We study an important problem of phishing attack localization aiming to tackle and improve the explainability (transparency) of email phishing detection. AI-based techniques for this problem have not yet been well studied."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024aitale,\ntitle={{AI}2{TALE}: An Innovative Information Theory-based Approach for Learning to Localize Phishing Attacks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3xpTXF5ALZ},\nnote={under review}\n}"
},
"abstract": {
"value": "Phishing attacks remain a significant challenge for detection, explanation, and defense, despite over a decade of research on both technical and non-technical solutions. AI-based phishing detection methods are among the most effective approaches for defeating phishing attacks, providing predictions on the vulnerability label (i.e., phishing or benign) of data. However, they often lack intrinsic explainability, failing to identify the specific information that triggers the classification. To this end, we propose an innovative deep learning-based approach for email (the most common phishing way) phishing attack localization. Our method aims to not only predict the vulnerability label of the email data but also provide the capability to automatically learn and figure out the most important and phishing-relevant information (i.e., sentences) in the phishing email data, offering useful and concise explanations for the identified vulnerability. \n\nThe extensive experiments on seven diverse real-world email datasets demonstrate the capability and effectiveness of our method in selecting crucial information, enabling accurate detection and offering useful and concise explanations (via the most important and phishing-relevant information triggering the classification) for the vulnerability of phishing emails. Notably, our approach outperforms state-of-the-art baselines by 1.5% to 3.5% on average in Label-Accuracy and Cognitive-True-Positive metrics under a weakly supervised setting, where only vulnerability labels are used without requiring ground truth phishing information."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Phishing Attacks",
"Email Phishing Attack Localization",
"Interpretability and Explainable AI",
"Deep Learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/98334b1cd3da556b99be966a527f6ba4e855eb45.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "AI2TALE: An Innovative Information Theory-based Approach for Learning to Localize Phishing Attacks"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3xqqYOKILp | BrainOOD: Out-of-distribution Generalizable Brain Network Analysis | main | Active | Out-of-distribution Generalization;Brain Network Analysis;Graph Representation Learning | learning on graphs and other geometries & topologies | 3;5;5 | 4;4;4 | 3;3;2 | 2;2;2 | 2;3;2 | 4.333333 | 4 | 2.666667 | 2 | 2.333333 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. How do you balance the four losses in the proposed method? Given the numerous modules and hyperparameters involved, does training the model from scratch carry a high risk of overfitting?\n2. Considering the frequent occurrence of the OOD generalization problem in brain network analysis, how could the proposed method be adapted or transferred to other models?\n3. Since the performance of fMRI-derived brain networks on the ADNI dataset is lower than that of structural MRI, do you believe it is appropriate or necessary to use it as a benchmark for the OOD generalization problem?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- **Originality**: This paper demonstrates a notable level of novelty, particularly in its combined approach of selecting critical node features and graph structures, along with the batch-level loss designed to identify key discriminative connections.\n- **Quality**: The methodology is thoroughly evaluated through comparisons with 16 existing methods across two datasets (ABIDE and ADNI), effectively highlighting its effectiveness and efficiency.\n- **Significance**: This research provides valuable insights into addressing the OOD problem in brain network analysis, contributing meaningfully to advancements in neuroscience."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work addresses the out-of-distribution (OOD) problem in brain network analysis. It introduces a framework called BrainOOD, which consists of a feature selector and a structure extractor. By filtering out noisy nodes and edges and enforcing the model to consistently select the same connections across all brain networks within each batch, the proposed method achieves strong performance on the ABIDE and ADNI datasets. Additionally, visualization results are provided to illustrate the method’s effectiveness."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- **Contribution of the Benchmark**\n\n**The claim of introducing the first benchmark seems somewhat overstated.** The ABIDE and ADNI datasets have been long established in brain network analysis and are widely used for evaluating brain disorder diagnosis models. Simply partitioning these datasets to create an OOD scenario may not constitute a significant contribution.\n\n- **Alignment of Motivation, Method, and Analysis**\n\nThe motivation of this work is to address the OOD generalization problem. However, **it is not clearly explained how the proposed method specifically tackles this issue**. While reducing noisy nodes and structures could indeed improve brain disorder diagnosis performance, the methodology and interpretive analysis lack clarity on how this approach mitigates the OOD generalization problem. For instance, visualizing the top 10 connections with the highest scores on both the ABIDE ID and ABIDE OOD sets could help demonstrate the method’s generalizability more effectively.\n\n- **Paper Organization**\n\nThe organization of the paper could be improved for clarity. **It may not be necessary to dedicate extensive sections to GNN and brain network fundamentals.** Additionally, placing the related work section directly after the introduction or immediately before the conclusion could improve the flow and readability."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Please see the weakness above. \n\n- In addition, can you provide an ablation study on the feature selector and structure extractor by evaluating configurations such as $(X', A)$ and $(X, A')$? These results would help to clearly demonstrate the contribution of each module. Additionally, similar to the discussion on edge scores, the node mask should also be examined to strengthen the claim that the proposed method yields clinically relevant results.\n\n- While several GNNs and HPGNN are incorporated into the framework, certain aspects remain unclear. Specifically, what advantage does using HPGNN with multiple layers (hops) offer over simply multiplying the graph Laplacian matrix, especially if the goal is to capture deviations from local patterns? Furthermore, given your assertion that the brain structure matrix A contains noise, why did you choose to retain A rather than use A’ during feature selection?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper addresses a critical gap in brain network analysis by focusing on OOD generalization and interpretability, which are essential for deploying models in real-world settings. The work has high significance for the medical and neuroscience community. \n- It presents a framework that improves diagnostic tools for neurological disorders like AD and ASD, potentially leading to earlier and more accurate diagnoses. \n- The authors evaluate their method across two major datasets (ABIDE and ADNI) and compare it with 16 baselines including brain-specific networks, which adds credibility to their results. \n- The alignment of identified brain patterns with known neuroscience findings lends additional weight to the framework's interpretability. Also, the ablation study demonstrates the need for each loss type."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents BrainOOD, a framework designed to address the challenges of Out-of-Distribution (OOD) generalization in brain network analysis. Specifically, BrainOOD aims to enhance the performance and interpretability of Graph Neural Networks (GNNs) in diagnosing Alzheimer’s Disease (AD) and Autism Spectrum Disorder (ASD). The method incorporates a feature selector, structure extractor, and auxiliary losses, leveraging the Graph Information Bottleneck (GIB) framework to recover causal subgraphs. Through extensive experiments on the ABIDE and ADNI datasets, the framework demonstrates competitive performance, outperforming baseline models in OOD settings."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1) The technical contribution of this paper appears to be marginal despite addressing the OOD generalization problem and enhancing interpretability in brain network analysis. While the introduction of an OOD benchmark for brain networks is appreciated, it is unclear if this benchmark adds novel challenges beyond those already present in multi-site datasets like ABIDE and ADNI. Furthermore, many of the technical components, such as the auxiliary losses and discrete sampling strategy, are borrowed from existing work. Although the paper effectively motivates the need for the Graph Information Bottleneck (GIB) framework, the core technical innovations do not extend significantly beyond prior work.\n\n2) One of the primary technical contributions --- the feature selection mechanism --- lacks clarity in its formulation. Specifically, the intuition behind $\\hat{X}$ derived from the covariance of $\\hat{H}$ and the use of $tanh()$ as the activation function is not well explained, leaving readers uncertain about the necessity of these design choices. \n\n3) The definition of the OOD problem itself also raises concerns. Table 2 indicates insignificant performance differences between in-distribution (ID) and OOD scenarios, even with the Empirical Risk Minimization (ERM) baseline, suggesting that the OOD scenario may not be as challenging as claimed. This raises the possibility that the proposed framework performs effectively only under moderate distribution shifts. Additionally, the paper would benefit from comparing the performance of other brain-specific models, such as BrainNetCNN or BrainGNN, under the same OOD conditions to better contextualize the reported improvements.\n\n4) Grammar should be double-checked."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. For the adjacency matrix, were the top 20% connections identified based on correlation magnitude (including both positive and negative correlation)?\n2. The classification setting (6-class) on the ADNI dataset. It is confusing to have three classes related to MCI (MCI, EMCI, and LMCI), which may affect the evaluation results. EMCI and LMCI are used in ADNI GO/2, while MCI used in ADNI 1 is deemed LMCI. A 5-class (CN, SMC, EMCI, LMCI, AD) setting is more reasonable.\n3. It would be helpful to add more description about how the reconstruction loss can help select informative features.\n4. It is not clear how in-domain testing was performed. \n5. What are the differences between the 10-fold-CV and the overall test in Tables 2 and 3?\n6. For evaluation, it would be better to add some conventional ML methods (e.g., SVM) as baselines.\n7. What do the ID and OOD checkpoints mean in Fig.3? The edge score seems quite low (max value around 0.08, Fig.3 top left); how many edges were generally included in the extracted sub-graph?\n8. There are several other parameters in the framework (e.g., temperature in eq.11, the number of samples k for the final prediction). How do they affect the performance?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "It is novel to simultaneously identify informative features and extract a causal subgraph for brain functional network-based prediction."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents BrainOOD, a novel GNN framework tailored for brain functional network analysis, which consists of a feature selector and a causal subgraph extractor to enhance generalization to out-of-distribution datasets. The proposed framework has been evaluated on two multi-site datasets and demonstrated improved classification performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Several descriptions are not clear. Please refer to the Questions section for details.\n2. The classification setting (6-class) on the ADNI dataset. It is confusing to have three classes related to MCI (MCI, EMCI, and LMCI), which affects the evaluation results. EMCI and LMCI are used in ADNI GO/2, while MCI used in ADNI 1 is deemed LMCI. A 5-class (CN, SMC, EMCI, LMCI, AD) setting is more reasonable."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "BrainOOD boosts GNN generalization and interpretability for brain networks, outperforming 16 methods and introducing the first OOD benchmark."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024brainood,\ntitle={Brain{OOD}: Out-of-distribution Generalizable Brain Network Analysis},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3xqqYOKILp},\nnote={under review}\n}"
},
"abstract": {
"value": "In neuroscience, identifying distinct patterns linked to neurological disorders, such as Alzheimer’s and Autism, is critical for early diagnosis and effective intervention. Graph Neural Networks (GNNs) have shown promise in analyzing brain networks, but there are two major challenges in using GNNs: (1) distribution shifts in multi-site brain network data, leading to poor Out-of-Distribution (OOD) generalization, and (2) limited interpretability in identifying key brain regions critical to neurological disorders. Existing graph OOD methods, while effective in other domains, struggle with the unique characteristics of brain networks. To bridge these gaps, we introduce \\textit{BrainOOD}, a novel framework tailored for brain networks that enhances GNNs’ OOD generalization and interpretability. The BrainOOD framework consists of a feature selector and a structure extractor, which incorporates various auxiliary losses including an improved Graph Information Bottleneck (GIB) objective to recover causal subgraphs. By aligning structure selection across brain networks and filtering noisy features, BrainOOD offers reliable interpretations of critical brain regions. Our approach outperforms 16 existing methods and improves generalization to OOD subjects by up to 8.5\\%. Case studies highlight the scientific validity of the extracted patterns, which align with findings in the known neuroscience literature. We also propose the first OOD brain network benchmark, which provides a foundation for future research in this field."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Out-of-distribution Generalization",
"Brain Network Analysis",
"Graph Representation Learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/c94ec9f94d0189748d4f620cb2dbc91600abe8c9.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning on graphs and other geometries & topologies"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/a34932cb521b3c4af5ff5094a65a62de9db64bf2.zip"
},
"title": {
"value": "BrainOOD: Out-of-distribution Generalizable Brain Network Analysis"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3xxxoh92Mo | Highlight: Learning Visual Prompts for Vision-Language Models | main | Active | VLMs;prompting;visual prompting;self-supervision | unsupervised, self-supervised, semi-supervised, and supervised representation learning | 3;5;5;5 | 5;4;5;4 | 3;3;2;3 | 1;3;2;2 | 2;3;2;3 | 4.5 | 4.5 | 2.75 | 2 | 2.5 | -0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see above."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is easy to read and follow.\n2. The authors have a nice experimental framework that starts from manual annotations and builds up to an \"unsupervised approach\".\n3. The performance gains over previous methods are non-trivial."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper improves upon methods that add a manual highlight over an image to improve CLIP performance. The authors optimize a learnable visual prompt and show that they can start from manual annotations to eventually arrive at the final method, which relies on image crops from a pretrained object detector. The authors show that their formulation improves upon previous methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The authors have not validated their initial hypothesis that different models will have different biases. Table 6 compares several CLIP models, but I am more interested to know if there is any difference in this behavior based on the pretraining objective and dataset. How would this compare for SigLIP, SILC, EVA-CLIP, MetaCLIP, etc., and for models trained on the OpenAI dataset, WebLI, DataComp, etc.?\n2. How does the method compare with different localization methods? The authors have used MAttNet. Is there data leakage in the localization network? Is it possible to use a generic model here?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1.\tThis is in reference to point 1 in the weaknesses. The apples-to-apples comparison (unsupervised regime) of the proposed approach with the existing ones cannot establish the proposed approach’s superiority. Any comment on this would be good to have.\n2.\tComparison with the closest paper Xie et al. (2024a) is missing, and on the same level playing field (without using manual supervision to train) Xie et al. (2024a) works better. Any comment on this would be good to have.\n3.\tThis is related to the third point in the weaknesses. Is it possible to comment on the missing experiments mentioned there?\n4.\tThe strong unsupported claims as detailed in point 4 in ‘weaknesses’ need support.\n5.\tIn Table 1, the results are impressive. However, I would like to see, for completeness, what the performance of the proposed method would be if it is measured on the same level playing field, i.e., it is unsupervised and uses an ensemble of the same backbones as the rest of the competing methods.\n6.\tI am not comfortable with categorizing methods like RedCircle with sup=x. There's a fine line between an 'unsupervised' approach and a zero-shot approach. RedCircle can best be described as zero-shot and not 'sup=x' (which means it gets training but no supervision, which is not true for RedCircle as it does not get any training).\n7.\tIn Figure 2, when both are vision encoders, what encoders are you using? Are the two vision encoders the same in such a case?\n8.\tThe next question is related to eqn. (2). What does the symbol of a cross inside a circle mean? What is the relation between the two $\\mathcal{F}(v)$'s and $I(v)$? What is the difference between upper case I and lower case i in eqn. (2)?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.\tThe paper presents a learnable way to predict both shape and color of a visual marker such that regions bounded by such markers in images can correspond to textual descriptions.\n2.\tThe discussion on the relevant works is good.\n3.\tThe use of image crops and synthetic captions for unsupervised training is interesting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper works on a large Vision-Language Model (VLM), especially CLIP. It learns to generate markings on images so that a textual expression related to the image has high correspondence with the image region inside the marking. In short, the authors address the Referring Expression Comprehension (REC) problem using VLMs. While the problem of REC using VLMs like CLIP is not new, unlike previous studies, this paper does not assume the existence of predefined markers like a red circle, square, or arrow. Rather, the authors propose to learn the particular form of the markers in both supervised and unsupervised manners. In the supervised approach, an InfoNCE loss is minimized between annotated regions with expressions and the network-generated regions (called highlights). In the unsupervised approach, the authors use synthetic captions instead of manually annotated descriptions of the crops. Experiments show comparisons with related works that do not use supervision but use ensembles of CLIP backbones."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tThe novelty of the approach is limited. Let us take the example of the first work in this line which is RedCircle (Shtedritski et al., 2023). The authors there have done extensive analysis on drawing different types of visual prompting e.g., circle, square, arrow etc. with different colors on images and then processing it and the text query via a pretrained VLM. This paper concludes and shows that a red circle works best for REC with CLIP. One may argue that RedCircle is dependent on manually tuning the markers for their shapes and colors. Manual tuning has already been replaced by learning to tune the visual markers in Xie et al. (2024a). Xie et al. also did not use supervision for training. The proposed work, on the other hand, works well only when supervised training is used (compare the ‘Highlight (unsupervised)’ and ‘RedCircle’ rows as well as the ‘Highlight (Supervised)’ and ‘RedCircle’ rows in table 2). The fact that ‘Highlight’ needs to use manual annotation from reference comprehension datasets to perform well adds to novelty but takes away the major strength and flexibility of VLMs – i.e., zero-shot task transfer. The other incremental novelty of using synthetic texts does not perform well compared to the state of the art.\n2.\tThe closest work to the proposed approach is Xie et al. (2024a) in flavor, as this paper also learns to draw a visual marker in a zero-shot manner. However, it has not been compared with in the zero-shot setting.\n3.\tThe proposed approach shows its usefulness only in one of the three tasks RedCircle (Shtedritski et al., 2023) evaluated themselves on. How it performs on the other two tasks, namely ‘Naming Keypoints’ and ‘Keypoint Localization’, is missing.\n4.\tThere are a few strong statements that are not supported well. For example, Line 047 says - alt texts in pretraining image-text datasets summarise global image descriptions. This is in contrast to the common wisdom, as alt-texts are usually very short and sometimes cryptic descriptions of a scene. Is there any reference or some examples that you can point to? Similarly, Line 60 says that whether a red circle is the prompt that best elicits these emergent behaviors in VLMs is unclear. However, in Shtedritski et al. (2023), there have been extensive studies on different types of markers e.g., circle, arrow, square, cross etc. or different colors of them. Table 2 in the main paper and table 10 and figure 10 in the supplementary clearly show that a red circle works best.\n5.\tMinor typos: Line 63: ‘unfeasible’ -> ‘infeasible’; Line 261: ‘bounding’ -> ‘bounding box’."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "As shown in the Weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "2. [Strengths]\n- a. It introduces Highlight, a method to automatically learn a visual prompt that highlights regions for a VLM.\n- b. The experiments are extensive, and the performance of the proposed method is promising.\n- c. The method is flexible and can work in both supervised and unsupervised cases."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "1. [Summary]\nThis paper focuses on visual prompt learning for vision-language models. It introduces Highlight, a method to automatically learn a visual prompt that highlights regions for a VLM. The proposed method can work in both supervised and unsupervised manners. Experiments show that the proposed method achieves good performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "3. [Weaknesses] \n1. What is the difference between this paper and other popular visual prompt learning methods like [a, b]? [a, b] learn tokens as visual prompts. It would be better to discuss and clarify the similarities and differences between this paper and [a, b] and include the discussion in the Related Work section.\n[a] Visual Prompt Tuning.\n[b] Progressive visual prompt learning with contrastive feature re-formation.\n\n2. The core of this paper is to learn visual prompts for vision-language models. Have the authors tested the proposed method on other VLMs in addition to CLIP? Besides, the Related Work section lacks a paragraph introducing VLMs. It would be better to add a new paragraph to introduce related work and recent advances in vision-language models, and to cite the VLM survey [c] as a reference for readers who want to know more about VLMs.\n[c] Vision-Language Models for Vision Tasks: A Survey\n3. Another question is whether the learnt visual prompts are independent of or dependent on the input images. Are they the same for all input images, or do they work differently for different input images?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. I noticed the experiments were only conducted on CLIP models. Have you considered testing the method on other vision-language models? If not, what were the main challenges preventing such evaluations?\n\n2. From Figure 1, I can see that your method introduces some blur to the original images. Could you clarify:\n * Does this blur affect the model's performance?\n * Is this blur necessary for the method to work effectively?\n\n3. Your results show that learned prompts perform better than manual ones, but the mechanism behind this improvement is not clearly explained. Could you:\n * Provide more analysis on why learned prompts work better?\n * Share some failure cases to help understand the method's limitations?\n\n4. I'm interested in the robustness of your method:\n * How stable is the performance across different runs?\n * How sensitive is it to hyperparameter choices?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is clear and easy to follow.\n\n2. The research goal is meaningful. They propose to automatically learn visual prompts instead of designing them manually, which is a clever solution to a common problem in vision-language models.\n\n3. Their experimental results are impressive. The method works well across different versions of CLIP models and is more efficient than ensemble methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "1. The authors present Highlight, a method that learns visual prompts for Vision-Language Models (VLMs) to improve their ability to localize specific image regions. The key innovation is automatically learning both the shape and color of visual markers in a differentiable manner, rather than relying on manually designed prompts like red circles. \n\n2. The method can be trained either with supervision from text-image region pairs, or without supervision using synthetic captions or images alone. \n\n3. The authors evaluate their approach extensively on RefCOCO datasets using different CLIP models, demonstrating that Highlight outperforms existing visual prompting methods and even ensemble approaches while being more computationally efficient."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.Robustness and Generalizability\n\n- The experiments are conducted exclusively on CLIP variants\n- I would like to see evaluations on other popular VLMs (e.g., Flamingo, GPT-4V)\n- Without broader validation, it's difficult to assess if this method is truly model-agnostic\n- I strongly suggest including at least 2-3 different families of VLMs in the evaluation\n\n2. Theoretical Understanding\n- The paper lacks insights into why learned prompts outperform manual ones\n- The visualizations don't effectively explain the mechanism's advantages\n- I recommend: \n(1) Providing case studies of success and failure cases\n(2) Including analysis of the learned prompt patterns\n\n3. Image Quality Impact\n- Looking at Figure 1, I'm concerned about the image blur introduced by the method\n- The authors should address: \n(1) How this blur affects the model's performance\n(2) If this is a limitation for certain types of images or tasks"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024highlight,\ntitle={Highlight: Learning Visual Prompts for Vision-Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3xxxoh92Mo},\nnote={under review}\n}"
},
"abstract": {
"value": "Large-scale Vision-Language Models, such as CLIP, demonstrate impressive capabilities and have multiple applications, from text-to-image generation to zero-shot classification. Recent work has suggested that visual prompts, such as a red circle, can steer the vision encoder to the circled region. While such vision prompts have now been used in various applications, they might be model-specific and depend on the model learning these behaviours from its training data. Discovering and evaluating various prompts might not be feasible given different models, tasks, and datasets. In this paper, we propose Highlight, a method to learn a visual prompt that highlights a region in an image or refines a manually engineered visual prompt. Using our framework, we can learn to highlight in a supervised way using a dataset of text-image region pairs or in an unsupervised way using synthetic captions or images only. Highlight outperforms other visual prompts, prompt learning approaches, and compute-intensive methods that use ensembles of multiple models and visual prompts."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"VLMs",
"prompting",
"visual prompting",
"self-supervision"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/ba2409ce1284cc058ce3575382d33e00fda787b3.pdf"
},
"presentation": null,
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Highlight: Learning Visual Prompts for Vision-Language Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3ygfMPLv0P | Tailoring Mixup to Data for Calibration | main | Active | mixup;calibration;confidence;robustness | alignment, fairness, safety, privacy, and societal considerations | 3;5;6;8 | 2;3;3;3 | 2;2;3;4 | 2;2;3;3 | 1;3;3;4 | 5.5 | 2.75 | 2.75 | 2.5 | 2.75 | 0.800641 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "**Question 1.** Why did not SK RegMixup be applied to ViT in Table 4? Is there any potential challenges or limitations of applying SK RegMixup to ViT architectures?\n\n**Question 2.** [line 976-978] In the proof of the first part in Theorem 3.1., whatever the value of $\\lambda\\_{1}$ is, $\\lambda$ can be 0. When $\\lambda=0$, then $\\tilde{\\mathbf{x}}(\\lambda)=\\mathbf{x}\\_{l} \\in \\mathcal{M}\\_{j}$, not $\\mathcal{M}\\_{i}$, and so does symmetrical case. In my opinion, it would be $\\forall \\lambda \\geq \\lambda\\_{1},~\\tilde{\\mathbf{x}}(\\lambda)=\\lambda \\mathbf{x}\\_{k}+(1-\\lambda)\\mathbf{x}\\_{l} \\in \\mathcal{M}\\_{j}$. The authors are kindly requested to examine and fix them if there is an error.\n\n---\n\n**Things to improve the paper that did not impact the score:**\n\n* Figure 4 caption: there is no result about *Circles* toy datasets. The mention of it from the caption would be removed if it's not intended to be included\n\n* [line 442] The ECE and AECE of MIT-A in ImageNet-R would be in bold. The authors have to double-check their result highlighting in Table 4 to ensure consistency and accuracy.\n\n* In Table 4, is there any reason why ECE and AECE are exactly same in OOD settings? The authors are kindly requested to explain why ECE and AECE are identical in OOD settings or, if this is unexpected, verify whether there might be an error in the reporting or calculation of these metrics.\n\n* The experimental results show that the SK RegMixup is an effective method. However, Table 6 does not provide the efficiency of the SK RegMixup in terms of computational cost. It is recommended to include computational cost metrics for SK RegMixup in Table 6, similar to what they've provided for other methods.\n\n* [line 879] Typo: cccross"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "* The proposed approach was well motivated by targeting the appropriate research problem of calibration-driven Mixup methods in classification and regression: Manifold mismatch. Furthermore, this problem was theoretically defined and proved with the supplement materials.\n\n* The proposed method was in a clear and well-organized manner, and the proof was provided in a manner that supported its conclusions.\n\n* The research problem was validated through experimental results; a comparison was made with the baseline.\n\n* Comprehensive experiments in classification and regression, with a particular focus on performance and computational cost, demonstrated the efficacy of the proposed approach."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a novel Mixup framework that employs a Similarity Kernel (SK) called SK Mixup to achieve a stronger interpolation between similar points while reducing interpolation otherwise. As a motivation of this study, the authors defined the concept of manifold mismatch, which can negatively impact the calibration of confidence in Mixup. They conducted experimental validation to assess the impact of this phenomenon on the distance between points to mix. Following the presentation of SK Mixup, the effectiveness of the proposed approach in alleviating the manifold mismatch was demonstrated through extensive experiments in classification and regression. Nevertheless, the extension to more complex tasks and theoretical analysis on the regularization effect of SK Mixup still need to be undertaken -- This is why the authors did not receive the highest score in *Contribution*."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "There is no weakness in the paper that would justify its rejection."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "None"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- In Eq. (3), there is a -1 inside the exponential. Why not absorb it into $\\tau_{max}$? Are there any reasons why it is presented that way? \n- In the classification scenario where the data samples (but not the labels) are noisy and corrupted, thus blurring the true distance between the samples used in mixing, how would the proposed method perform?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Overall the paper is well written and is easy to follow\n- An innovative yet efficient method is proposed to improve mixup so that manifold mismatch likelihood is reduced, and better calibration and accuracies can be achieved\n- Adequate empirical results on both classification and regression tasks are provided to demonstrate the proposed method"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper first demonstrates that distance between the data used in mixup can impact the likelihood of manifold mismatch (a phenomenon where mixed samples lie outside of the class manifolds of the original data used for mixing). It then proposes an efficient framework to mitigate the occurrence of manifold mismatch. The key idea is to dynamically change the distributions of mixing coefficients via a similarity kernel that measures the distance between the mixed samples. Empirical results are provided to demonstrate the effectiveness of the proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- While Theorem 1 shows the existence of manifold mismatch, it does not tell us anything about the assumption (which is used in the paper) that the higher the distance between two points, the more likely their convex combination will fall outside of the original manifolds, and thus, the more likely a model would assign a different label than the original labels of the two points. Some theoretical results in establishing the validity of such assumption would further strengthen the paper. \n- There are some missing papers that could be worth mentioning in the related work. In particular, mixup related works such as [1,2,3]. \n- For the experiments, it is not clear how the proposed method would compare with the method proposed by [1,2,3], and also how would it perform for more complex datasets like ImageNet, ImageNet-C, ImageNet-R and ImageNet-P.\n\n---\n\n[1] Dan Hendrycks, Norman Mu, Ekin D. Cubuk, Barret Zoph, Justin Gilmer, and Balaji Lakshminarayanan. AugMix: A simple data processing method to improve robustness and uncertainty. Proceedings of the ICLR, 2020\n\n[2] Soon Hoe Lim, N. Benjamin Erichson, Francisco Utrera, Winnie Xu, and Michael W. Mahoney. Noisy feature mixup. Proceedings of the ICLR, 2022.\n\n[3] Erichson, N. Benjamin, Soon Hoe Lim, Francisco Utrera, Winnie Xu, Ziang Cao, and Michael W. Mahoney. Noisymix: Boosting robustness by combining data augmentations, stability training, and noise injections. Proceedings of the AISTATS 2024."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. Line 011 or 012 on page 1. \"Along with improved performance\", what performance, please make it specified\n2. Line 012 or 013 on page 1. \"improving calibration and predictive confidence\", from my understanding they are the same thing, no need to use \"and\" here in my opinion.\n3. Line 014. Infact in this line the paper has mentioned \"calibration of confidence\", what is calibration of \"confidence\", again, in my opinion it's just calibration.\n4. Line 019 to 020. Again, please specify what \"performance\" it is when mentioning \"improve performance\".\n5. Line 054. Here the paper mentions \"label noise\", can the authors provide a brief description of it like the way they have described manifold intrusion in previous text?\n6. Line 140. Again, please specify the term \"performance\".\n7. Is there and related research works about calibration-driven Mixup methods in regression tasks?\n8. Line 161. If possible it's much better to turn this title to the next page, it would look better.\n9. Line 162 to 169. The notation preliminaries in this paragraph feel a bit too simple and careless. For example, when defining a dataset, one may want to first define a data space and label space, indicating that these spaces are subspaces of some vector spaces of certain dimensions, and then indicate that the training dataset is of some size with the examples being drawn from some data distribution over the data space, etcs.. Putting those details in one single line feels careless. Also, what is $M$? Many would of course just simply take it as the number of classes in the classification problems, but it also needs to be specified in the paper. \n10. Line 162 to 169. There is also some notations that are inconsistent. The paper first indicates that the labels are $M$-dim vectors, then what is the input and output dimension of the encoder $h_\\phi$? 
The way it is presented make the model output $f_\\theta(\\text{x})$ feels like a scalar, while it should be a vector the same dimension as the labels since $\\hat{y}:=f_\\theta(\\text{x})$.\n11. Line 193 to 194. \"bounded support $\\mathcal{M}_m\\subset\\mathcal{H}$\". Is this also how the manifolds are defined in this paper?\n12. Line 194 to 195. To assume that classes are separated, it means all samples belonging to \"different classes manifolds\" should disjoint, is that correct?\n13. Line 208. Why would the mixed points' belonging \"to no vlass manifold at all\" be a bad thing? In my opinion, when using Mixup for training, especially when the data dimension is high, it's actually mostly the case that the mixed point fall into the \"void\" areas in the entire data space.\n14. Line 212. \"the higher the distance between two points, the more likely their convex combination will fall outside of the original manifolds\". Here is a counter example. Suppose we have two classes of points in $\\mathbb{R}^2$. Suppose their manifolds are two separated line segments on the horizontal axis. If we combine the rightest point from the left manifold and the leftest point from the right manifold, the mixed points will almost all fall into the middle area which is outside both the original manifolds. If we combine the leftest points from both manifold, the fact is, a much bigger proportion of mixed points would fall inside the left manifold. But, the distance between the points in the second case is much larger than that in the first case.\n15. Line 215. \"Using a Resnet18 trained ...\", is it trained using ERM or Mixup?\n16. How to choose $\\tau_{std}$ and $\\tau_{max}$?\n17. How does SK Mixup help improve calibration? The derivation of the algorithm only suggests that SK Mixup can help mitigate manifold mismatch, but is manifold mismatch the necessary reason, or even the real reason, of bad calibration?\n18. 
In the process of trading off between diversity and uncertainty, how would calibration and generalization behave? Will they behave like a up-side-down U curve such that there is a sweet spot?\n19. The page number of page 7, and the header \"under review\" line in page 8, there are some strange hyperlinks around the text.\n20. Theorem 3.1 (i). \"$\\lambda\\in]\\lambda_1,\\lambda_2[$\", it should be $[\\lambda_1,\\lambda_2]$ right?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. It's a novel idea to summarize several observed phenomenons or properties of Mixup into one unified terminology: manifold mismatch\n2. The correlationship between data points pair distance and the occurence of manifold mismatch is carefully derived.\n3. The proposed algorithm is meant to balance the diversity and the uncertainty of the virtual examples in Mixup, which is meaningful and helpful"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper has come up with a concept called manifold mismatch, which is a phenomenon in Mixup that can harm the generalization and calibration performance of the trained models. The paper also show that such a manifold mismatch behavior is correlated to the distance between the data points being paired and mixed in Mixup. Then, by applying a similarity kernel, the paper proposes a variant training algorithm of Mixup that can dynamically adjust the mixing coefficient depending on the distance of the data points being mixed. Finally, the authors have empirically verified the effectiveness of the proposed algorithm on various models and tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Poor presentation\n2. Doesn't clarify the connection between manifold mismatch and calibration. \n3. The core claim that distance impacts manifold mismatch is too simple to be seen, and it's actually under-justified since there seems to be much more necessary conditions of manifold mismatch like data distribution, structure of learned features and manifolds, etc.. In fact, the idea of improving Mixup by forcing more mixtures between closer points is not novel, like k-mixup [1]\n4. The idea of dynamically adjust the mixing coeffecient is also not new, like AdaMixup [2].\n\n[1] https://arxiv.org/abs/2106.02933\n\n[2] https://arxiv.org/abs/1809.02499"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Instead of using a similarity kernel introduced in this paper, a simple implementation is to divide sample pairs based on their distances (like the case in Figure 2) and use two different Beta distributions to sample the interpolation weights. How will this baseline perform compared to the proposed method? Such comparison can further verify the motivation and support the proposed method. \n- While the authors have mentioned that non-linear interpolation methods have several disadvantages (larger computational cost, limited application) in section 2.1, some empirical comparison on their performance for image data sets in Table 2-4 should still be necessary. In addition, the authors may also include some of these interpolation methods in Table 6 to verify that their computational cost are muchh larger compared to linear mixup methods. \n- Moreover, can the proposed method be combined with these non-linear interpolation methods as well? I suppose such combination will be straight-forward, as we only need to change the sampling distribution of interpolation weights for different sample pairs. Some discussion (possibly with some empirical results) will be welcome here. \n- I am a bit puzzled by the performance of different methods on ImageNet-A in Table 4. None of them has an accuracy higher than 3%, which seems worse than most methods reported in [1]. Is this due to some discrepancies in experimental setup (e.g., less training epochs)? If so, the authors may need to clarify that such discrepancies are not in favor of their proposed method. \n- Also, how are the baseline methods chosen in Table 6? From my perspective, I suppose we need to compare the time cost of SK Mixup against Mixup or some other baseline methods with rather good performance. Nevertheless, it seems either of the above cases applies, and some explanation may be needed here. \n\n## References\n[1] On Feature Normalization and Data Augmentation. CVPR 2021"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The proposed method is clearly introduced and easy to understand"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes to improve different mixup methods by adjusting the probability distribution to sample interpolation weights according to the distance between a given sample pair. Using the proposed method, sample pairs that are far from each other are less likely to be mixed. Empirical results on different tasks and data sets demonstrate the effectiveness of the proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The motivation of proposed method can be strengthened\n- Some baseline methods seem missing, which can make the empirical comparison not supportive enough"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We show that taking distance into account in mixup reduces occurence of mismatch between mixed labels and mixed samples, improving confidence, calibration and robustness."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024tailoring,\ntitle={Tailoring Mixup to Data for Calibration},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3ygfMPLv0P},\nnote={under review}\n}"
},
"abstract": {
"value": "Among all data augmentation techniques proposed so far, linear interpolation of training samples, also called Mixup, has found to be effective for a large panel of applications.\n Along with improved performance, Mixup is also a good technique for improving calibration and predictive confidence.\n However, mixing data carelessly can lead to manifold mismatch, i.e., synthetic data lying outside original class manifolds, which can deteriorate calibration of confidence.\n In this work, we show that the likelihood of manifold mismatch increases with the distance between data to mix.\n To this end, we propose to dynamically change the underlying distributions of interpolation coefficients depending on the similarity between samples to mix, and define a flexible framework to do so without losing in diversity. We provide extensive experiments for classification and regression tasks, showing that our proposed method improves performance and calibration of models, while being much more efficient."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"mixup",
"calibration",
"confidence",
"robustness"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/7540be5feee1e51a57ac516850038d494e500c62.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/c054a74fb185771b89711955ece573f6b749ad94.zip"
},
"title": {
"value": "Tailoring Mixup to Data for Calibration"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3ylNuZXtMg | Activations Aren't Cheap in LoRA, Weights Are | main | Active | PEFT;LoRA;finetuning;LLM;memory efficiency;diffusion | infrastructure, software libraries, hardware, systems, etc. | 3;3;5;6 | 4;3;3;4 | 3;2;1;3 | 2;2;2;2 | 2;3;2;3 | 4.25 | 3.5 | 2.25 | 2 | 2.5 | 0.19245 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- What was the optimizer used for these experiments? I suspect this is not Adam, since the memory overhead of the optimizer states would heavily impact the memory requirements of full model fine-tuning. Can the authors reproduce results of full model fine-tuning, LoRa (both version) with the Adam optimizer?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper tackles an important topic: improving the memory efficiency of fine-tuning LLMs/LVMs\n- The proposed technique is simple and easy to understand\n- The paper is well-written\n- The experiments show that the proposed method does improve both latency and memory of LoRA fine-tuning"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a weight-based reformulation of LoRA to reduce the memory overhead of LoRA fine-tuning, which can be worse than that of full model fine-tuning in certain scenarios. The proposed reformulation is mathematically equivalent (assuming no dropout), and thus can offer both memory and latency savings for free."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper studies the extreme case of adding LoRA to all linear layers. In practice, LoRA layers are added to a select layers (typically attention layers only). How would the memory profiling look like under that setting? Do the current experiments also add LoRA to the final output embedding matrix?\n- The method only works when no extra transformations are applied to the LoRA hidden representation (e.g., dropout).\n- The datasets being used in Tables 2 and 4 are pointless, what is shown is profiling of a single batch, this could be done with randomized data."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1) Does this reformulation introduce any risks of numerical instability?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1) Improved Latency and Performance: The weight-based method reduces memory consumption and latency. This improvement is particularly valuable for users aiming to fine-tune models on consumer hardware with limited resources.\n\n2) Applicability to Other Fine-Tuning Methods: The reformulation is applicable to other parameter-efficient fine-tuning (PEFT) methods, not just LoRA."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper \"Activations Aren't Cheap in LoRA, Weights Are\" presents a method to address the issue of high memory consumption associated with activations in smaller large language models (LLMs) with extended context lengths. The authors propose a reformulation that focuses on manipulating model weights instead of activations during fine-tuning, aiming to reduce the memory overhead that grows with increasing context lengths. This weight-based approach is designed to offer memory savings and improved latency, particularly in scenarios where memory resources are constrained."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1) Limited Novelty: The approach is primarily a reformulation, shifting operations from activations to weights. While practical, it doesn’t introduce new insights or innovative techniques.\n\n2) Limited Impact: The benefits of this work are mainly applied to smaller models with long context lengths on memory-constrained hardware, so the impact is somewhat narrow and may not generalize to larger models or different settings."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1-3. The primary concern is related to the weaknesses above. If these concerns are adequately answered, I am willing to consider increasing the score.\n\n4. Could you provide a detailed comparison of activation memory and weight memory, in addition to the maximum memory usage shown in Table 2 and Table 3?\n\n5. It would be beneficial to mention methods like gradient checkpointing [1] and activation compressed training [2-6] in the related works section. Additionally, experimental results comparing these methods would strengthen the work. If direct one-to-one comparison is challenging due to differences in scope, please clarify that these methods are orthogonal and provide experimental results that apply these techniques in combination with the proposed method. If time is limited, it would be helpful to demonstrate that the activation-based paradigm is more efficient than the weight-based paradigm when gradient checkpointing [1] and GACT [3] are applied.\n\n[1] Training Deep Nets with Sublinear Memory Cost, arxiv, 2016.\n[2] ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training, ICML, 2021.\n[3] GACT: Activation Compressed Training for Generic Network Architectures. ICML, 2022\n[4] Learning with Auxiliary Activation for Memory-Efficient Training, ICLR, 2023.\n[5] DropIT: Dropping Intermediate Tensors for Memory-Efficient DNN Training, ICLR, 2023.\n[6] ALAM: Averaged Low-Precision Activation for Memory-Efficient Training of Transformer Models, ICLR, 2024"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The simple idea improves the efficiency of activation memory in traditional PEFT methods (such as LoRA) and reduces running latency.\n\n2. The proposed idea can be easily applied to extended PEFT methods of LoRA, such as IA, VeRA, and LoReFT.\n\n3. Sharing the code implementing this idea enhances the credibility of the evaluation results."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper highlights the increasing demand for memory as context lengths expand in transformers, which could impact fine-tuning on consumer GPUs. While LoRA has provided memory savings, its benefits may diminish with trends toward smaller models and longer contexts. The proposed weight-based reformulation, which merges the LoRA branch with the main path by combining BA into W, effectively reduces memory usage and latency across various Parameter-Efficient Fine-Tuning (PEFT) methods. Experiments show that this approach uses significantly less memory and time compared to the activation-based approach, as demonstrated on tasks such as language modeling, diffusion models, and using LLaMA-3.2 models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The activation-based and weight-based paradigms should be adopted depending on different scenarios. For example, the weight-based paradigm is useful when the activation size is large compared to the model size, but it becomes disadvantageous in the opposite case. Although the authors qualitatively mentioned this, they did not provide a quantitative analysis or experiments to determine in what range the proposed weight-based paradigm would be beneficial, which reduces the algorithm's practical utility in real-world applications.\n\n2. It is also questionable whether scenarios involving small models with large activations (such as large batch sizes or long sequence lengths) occur frequently enough to justify the weight-based paradigm's effectiveness. Generally, small LLM models are designed for edge devices, where limited memory makes it challenging to support long sequences. In such cases, the scenario would likely involve a small model with small activations, raising doubts about the weight-based paradigm's effectiveness in these instances.\n\n3. Upon reviewing the provided code, it appears challenging to apply dropout to the activation results generated after the weight-based reformulation. This may impact the fine-tuning results, suggesting a lack of evidence for the claim that the proposed method does not affect model performance, as no accuracy comparison experiments were conducted."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Can you clarify why the current conclusion can be extended to even longer sequence lengths like 128K? The current setup of experiments and analysis is not convincing enough to me. It would help a lot if you could offer evaluation results on longer sequence lengths and provide a breakdown of the \"activation-to-weight ratios\" in different components of LLMs.\n\n2. Can you justify why your method is better than full fine-tuning? Again, how can the conclusion be extended to longer sequence lengths?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "The finetuning of LLMs for long sequence lengths is an important problem, and the proposed method is clear and reasonable."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a reformulation method to improve the LoRA and some PEFT techniques used in LLM finetuning. It reformulates the activation-based method to weight-based to reduce memory consuming and latency for longer sequence length and larger batch size."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The proposed reformulation is designed for long-sequence-length situations, but the maximum sequence length of the datasets used in evaluation is less than 8192. Datasets with longer sequence lengths are necessary to verify the claim. Can you explain why the trends observed in that study would extend to even longer sequences like 128K?\n\n2. The estimation of the activation-to-weight ratio is not fair enough. The computation of attention scores has a different pattern from linear layers, resulting in quite different ratios in LLMs. Can you provide a breakdown of the activation-to-weight ratios for different components of the model, including attention layers and feed-forward layers?\n\n3. The configuration of experiments is not clear. Which type and how many GPUs are used? Are any distributed training methods used? These configurations can significantly affect the performance. Can you give more experiment setup details and outline the hardware configuration, distributed training setup (if any), and any other relevant implementation details that could affect performance?\n\n4. From the evaluation results, the proposed reformulation does improve the original method but shows little advantage over full fine-tuning on both memory and latency. Can you provide a more comprehensive comparison between your method and full fine-tuning, including accuracy metrics? Can you justify why your improved methods are better than full fine-tuning under different configurations of sequence length, batch size, and model size?\n\n5. Figure 1 lacks information. What’s the sequence length for the left part and what’s the model size for the right part? The explanation should have been in the caption instead of the appendix. It’s also confusing that the markers in the two parts are different.\n\n6. The caption of Figure 3 mismatches the figure. It should be \"top and bottom\" instead of \"left and right\"."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Reformulating PeFT methods as changes to weights and not activations saves a lot of memory in small models and long context situations."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024activations,\ntitle={Activations Aren't Cheap in Lo{RA}, Weights Are},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3ylNuZXtMg},\nnote={under review}\n}"
},
"abstract": {
"value": "LoRA has become the prevailing technique for finetuning large neural networks with limited computational resources. Historically, activations have been regarded as small and computationally inexpensive to manipulate—a view reflected by LoRA, which leverages this assumption and adds a low-rank term to intermediate activations. However, in the era of modern large language models (LLMs) and diffusion models, this notion has been challenged by the desire for increasing context lengths and smaller models, a trend which inevitably leads activations to consume more memory than the model weights themselves. Surprisingly, when finetuning a 1B model with a context length greater than 2048, we find that LoRA finetuning uses more memory than full-parameter finetuning. This study finds that manipulating additional model weights within the computation graph in parameter-efficient finetuning techniques can often be more memory-efficient than operating on the activations. We provide a semantically-equivalent computation graph reformulation for LoRA, and other popular PeFT techniques, which saves memory and trains faster, advancing the Pareto-frontier for finetuning tasks that can be achieved on consumer hardware. Under practical conditions, this reformulation provides up to a 1.4x reduction in max memory usage and latency for LoRA finetuning across various language and diffusion transformers."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"PEFT",
"LoRA",
"finetuning",
"LLM",
"memory efficiency",
"diffusion"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/1b78e04209b9e461737a23d820da7a4f8e6292f1.pdf"
},
"presentation": null,
"primary_area": {
"value": "infrastructure, software libraries, hardware, systems, etc."
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/d2c36ad4fa34505c5133c85fedb7dd95148a2919.zip"
},
"title": {
"value": "Activations Aren't Cheap in LoRA, Weights Are"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3zEKTw9fSB | Generative Parameter Efficient Fine-Tuning | main | Active | Parameter Efficient Fine-Tuning;Transfer Learning | transfer learning, meta learning, and lifelong learning | 3;5;5 | 3;4;3 | 2;3;3 | 2;3;2 | 2;1;2 | 4.333333 | 3.333333 | 2.666667 | 2.333333 | 1.666667 | 0.5 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Could the authors give some intuition on why the proposed changes are beneficial?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "LoRA is now a widely adopted technique and improvements over it can make profound impacts."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a modification to the well-known LoRA method. Technically, as the original LoRA method can be formulated as $\\overline{W} = W + AB$, the proposed modification changes it to $\\overline{W} = W + WAB'$, with $B'$ shared across layers. The authors have discussed the relationship with ReFT, and used experiments to support the efficacy of the modification."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**Unconvincing method design**\n\nThe biggest weakness to me is that the proposed method is not supported by reasonable and convincing motivations. Specifically, there are two major changes from the original LoRA:\n1. sharing half of the LoRA weights across layers;\n2. involving the original weight as an extra term in the weight delta.\n\nHowever, it is unclear why these two changes are useful and how they bring benefits. After reading the paper I cannot get a satisfying answer. This problem fundamentally limits the value of the work, as it is less likely for people to give the proposed method a try without an intuition that makes them believe it would lead to better results.\n\n**The introduction of the simple method is unnecessarily complicated**\n\nAs mentioned in the summary part, the core of the proposed method is simply $\\overline{W} = W + AB$ -> $\\overline{W} = W + WAB'$, but the paper makes me feel that it is much more complicated.\n\n**Content organization can be better**\n\nThe first section is too long. As an introduction section, it involves too many details that are hard to fully understand before reading the methodology section. I suggest only preserving the high-level ideas in this section while moving the technical details elsewhere.\n\n**Others**\n\n1. Tied-LoRA [1] also works on LoRA + weight-sharing, and I suggest adding some analysis & comparisons.\n2. I cannot find information about the backbone model for the visual experiments; welcome to correct me if I missed it.\n \n[1] Renduchintala, Adithya, Tugrul Konuk, and Oleksii Kuchaiev. \"Tied-lora: Enhacing parameter efficiency of lora with weight tying.\" arXiv preprint arXiv:2311.09578 (2023)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "GIFT represents a unique approach to generating fine-tuned weights directly from pretrained weights, sharing parameters across layers to enhance efficiency.\nExperiments demonstrate that GIFT outperforms existing PEFT methods on various natural language and computer vision tasks while using significantly fewer parameters, showing improvements in memory efficiency as well.\nTested on diverse tasks, GIFT shows effectiveness across commonsense reasoning, arithmetic, instruction following, and visual recognition tasks, reinforcing its versatility as a parameter-efficient fine-tuning approach."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces Generative Parameter-Efficient Fine-Tuning (GIFT), a method to fine-tune pretrained Transformer models with fewer parameters by generating fine-tuned weights from pretrained ones. They show this formulation can address two questions: 1) an explicit and direct mapping between the fine-tuned model and the frozen pretrained model, and 2) bridging parameter-efficient fine-tuning and representation fine-tuning. The proposed GIFT method is designed with a lightweight structure of only two linear layers, shared across selected layers in the model. Using minimal linear layers without bias, GIFT achieves significant parameter reductions compared to LoRA and performs better across some NLP and computer vision tasks, obtaining a slightly higher win rate on instruction tuning than GPT 3.5."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Lack of some comparison settings on full finetuning: How are the comparison results with full finetuning in Table-2 and Table-3 for the commonsense reasoning and arithmetic reasoning tasks? Table-1 compares the full finetuning setting on Llama-2 7B for the instruction following task, but Table-2 and Table-3 did not report this setting.\nPotential Scalability Concerns: Although parameter-efficient, the scalability of GIFT for larger models (beyond 8B parameters, like llama1-13B, llama3-65B) isn’t explicitly demonstrated. The Llama 1-3 model versions presented by the authors are all at or below 8B, leaving questions about its performance in high-scale deployment. \nLimited Ablation on Layer Selection and Configuration: GIFT’s performance may vary depending on which layers are selected for fine-tuning. While some experiments address this, there is minimal ablation on different layer selections and no comparison with LoRA applied to the same layers.\nSome compared models are not recent enough: Table-1 shows the result of fine-tuning Llama-2 7B with GIFT for the instruction following task, but the compared GPT series model is GPT 3.5 Turbo, which is somewhat outdated. How does GIFT compare against more recent models like GPT-4o or others?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "See weaknesses.\n\n+) \n\n1. What did the authors want to say in Section 2.3? I think the idea behind this is similar to 2.2. What is the meaning of \"accumulate\" in the paragraph?\n\n2. Please provide more details regarding the experiment in Fig. 2. Specifically, clarify the meaning of the cluster. If the authors aim to demonstrate that GIFT enhances object highlighting within the attention modules, results should be compared with those of the pretrained model for a meaningful evaluation."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper provides new insights into fine-tuning techniques through the proposed Generative Parameter-Efficient Fine-Tuning (GIFT). I think it can be viewed as a specific form of Representation Fine-Tuning (ReFT), where all tokens in the selected layers share the same re-parameterization parameters. Notably, GIFT is easier to implement than ReFT, utilizing two linear layers for weight re-parameterization without explicitly modifying token embeddings.\n\n2. The performance is impressive. They achieve similar or better performance with fewer parameters compared to other PEFT methods. Validation was conducted across multiple tasks, datasets, and models, demonstrating GIFT's effectiveness and versatility."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a new fine-tuning method called Generative Parameter-Efficient Fine-tuning (GIFT), which trains two linear layers to project the pre-trained weight matrices into fine-tuned weights. The authors argue that it offers a unifying perspective on PEFT and representation-efficient fine-tuning (ReFT) approaches by projecting pre-trained weights linearly. The results demonstrate that GIFT improves performance while using fewer parameters compared to previous parameter-efficient fine-tuning (PEFT) methods, such as LoRA, ReFT, and VeRA."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. In my opinion, the authors make claims about aspects that remain unexplained, which can confuse readers. The authors should provide further clarification to support these claims. For instance:\n\n(Line 051) \"but the learnable weight-residuals do not have direct information exchange with the pre-trained weights\"\n\n(Line 135) \"one of the simplest updates that minimally distorts and maximally preserves the pre-trained knowledge is defined by Eqn.1 and Eqn.2, thanks to the low-rank factorized linear projection in the parameter space.\"\n\n(Line 181) \"Additionally, adding fixed random matrices with learnable scales in PEFT makes the relationship between fine-tuned models and frozen pretrained models less intuitive.\"\n\n(Line 194) \"Furthermore, token-level interventions lack a holistic understanding of the relationship between ReFTed models and frozen pretrained models.\"\n\n2. There are issues with mathematical notation throughout the paper, making it difficult for readers to follow the ideas presented. Please refer to other PEFT papers (e.g., ReFT, LoRA) for guidance on presenting mathematical concepts clearly. For example, matrices should be represented in bold uppercase, vectors in bold lowercase, and scalars in italic lowercase. Additionally, in machine learning literature, symbols like θ (theta) and φ (phi) are conventionally used to denote model parameters, not calculations within the model.\n\n3. The content has redundancies; for instance, the Related Work section and Section 2.1 cover similar material, leading to overlap."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Generative Parameter Efficient Fine-Tuning (GIFT) presents a method to learn explicit, linear mapping between pretrained and fine-tuned models, and outperforms prior methods with ~15 times fewer parameters"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024generative,\ntitle={Generative Parameter Efficient Fine-Tuning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3zEKTw9fSB},\nnote={under review}\n}"
},
"abstract": {
"value": "We present Generative Parameter-Efficient Fine-Tuning (GIFT) for adapting pretrained Transformer backbones on downstream tasks. GIFT learns to generate the fine-tuned weights for a layer directly from its pretrained weights. The GIFT network is parameterized in a minimally-simple way by two linear layers (without bias terms), and is shared by different pretrained layers selected for fine-tuning (e.g., the Query layers), which result in significantly fewer trainable parameters compared to the layer-specific methods like Low-Rank Adapter (LoRA). We also show this formulation bridges parameter-efficient fine-tuning and representation fine-tuning. We perform comprehensive experiments on natural language tasks (commonsense and arithmetic reasoning, instruction tuning, and sequence classification) and computer vision tasks (fine-grained classification). We obtain the best performance and parameter efficiency among baselines on commonsense and arithmetic reasoning, and instruction following using the Llama family of models and on visual recognition benchmarks using Vision Transformers. Notably, compared to LoRA, we obtain 5.7% absolute increase in average accuracy with 15 times reduction of parameters on Commonsense170k using Llama-3 (8B), and 5.9% absolute increase in the win rate with 4 times reduction of parameters using Llama-2 (7B) during instruction tuning. Our GIFT also obtains a slightly higher win rate on instruction tuning than GPT 3.5 (Turbo 1106)."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Parameter Efficient Fine-Tuning",
"Transfer Learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/9b184d44658f6f4f91ad1eec5a70627ccb75e060.pdf"
},
"presentation": null,
"primary_area": {
"value": "transfer learning, meta learning, and lifelong learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/820afbceab1e7b3391c5e5cdd8b25ae2971e0a2a.zip"
},
"title": {
"value": "Generative Parameter Efficient Fine-Tuning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3zWvZv9xFh | Receptor-Specific Diffusion Model: Towards Generating Protein-Protein Structures with Customized Perturbing and Sampling | main | Active | protein structure prediction;diffusion model;graph neural network | applications to physical sciences (physics, chemistry, biology, etc.) | 3;3;3;5 | 3;4;3;4 | 2;2;2;3 | 2;2;2;3 | 3;3;1;3 | 3.5 | 3.5 | 2.25 | 2.25 | 2.5 | 0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "No ethics concerns."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "See the weaknesses above."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The motivation behind this work is clear and robust. Receptor-specific diffusion processes incorporate more informative prior knowledge for modeling binding structures, with the center of binding site atoms serving as an effective indicator for binding. \n\nThe proposed method outperforms all baseline models across various metrics on the DB5.5 dataset. Additionally, it boasts impressive inference speed."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses challenges in generative-model-based protein-ligand structure design, specifically targeting inefficiency and limited generalization in existing diffusion-based approaches. The authors propose the Receptor-Specific Diffusion Model (RSDM), which introduces a novel method of customized perturbing and sampling to more accurately generate ligands tailored to specific receptors. RSDM uses receptor-specific information to adjust the sampling distribution, altering noise for customized perturbations, and employs a stepwise denoising schedule to refine ligand generation. Experimental results demonstrate that RSDM is highly competitive with leading models like ElliDock and DiffDock-PP, while also offering faster inference speeds. This positions RSDM as a promising tool for reliable and efficient protein-ligand generation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The presentation of this work could be improved as it contains some typographical errors; for instance, 'sets' on line 513 may be a typo. \n\nThe method itself is relatively straightforward, with similar approaches employed in DecompDiff [1].\nInformative priors are used to refine the diffusion and reverse processes, enhancing the quality of generated samples, although some important references are missing.\n\nFurthermore, since the binding site is unknown during inference, it would be beneficial to investigate how the quality of the predicted binding site affects performance. \n\nThe proposed method underperforms on antibody structure prediction compared to several baselines in terms of IRMSD. \n\nThe generalizability of this approach could be further validated by extending the framework to protein-ligand (small molecule) complex structure prediction tasks.\n\nReference:\n\n[1] Guan, J., Zhou, X., Yang, Y., Bao, Y., Peng, J., Ma, J., Liu, Q., Wang, L. and Gu, Q., 2024. DecompDiff: diffusion models with decomposed priors for structure-based drug design. ICML 2023."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. To clarify the utility of Personalized Sampling Distribution and Step-by-step Data Purification, it would be helpful if the authors could provide more ablation studies detailing the effect on CRMSD, IRMSD, and computational efficiency. \n2. The description of the model design is ambiguous and unclear. For example, how does the model handle residue padding when modeling 14 atoms per residue as defined in Section 4.1? More details on implementation would improve reproducibility and readers' understanding of the model design.\n3. The baselines are all docking methods. A comparison with generative models, such as RFdiffusion, would provide a more comprehensive evaluation."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- RSDM achieves strong performance and inference efficiency compared to methods without searching\n- The presentation is mostly clear and the method is easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper focuses on receptor-ligand binding structure design, introducing a personalized sampling distribution and step-by-step data purification into the diffusion model to incorporate receptor-specific information. The authors address the limitations of previous methods, which overlook the structural and chemical differences between receptors and apply a uniform noise distribution across all receptor types. Specifically, the mean of the prior distribution in the diffusion process is shifted to the mean of the corresponding receptor, and the influence of this information is diminished during the sampling process with a predefined schedule. The authors demonstrate that RSDM achieves strong performance among methods without searching, and competitive inference time compared to search-based methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The idea of shifting the mean in the personalized sampling distribution has been explored in prior works, such as DecompDiff [1], potentially making the technical contribution a bit weak.\n- Missing important baselines, e.g., RFdiffusion [2].\n\n[1] Guan, Jiaqi, et al. \"DecompDiff: diffusion models with decomposed priors for structure-based drug design.\" arXiv preprint arXiv:2403.07902 (2024).\n\n[2] Watson, Joseph L., et al. \"De novo design of protein structure and function with RFdiffusion.\" Nature 620.7976 (2023): 1089-1100."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I would like to learn from the author's perspective from the weaknesses discussed above, thanks!"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Thanks for the work. The paper is well written with nice figures."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a receptor-specific diffusion model tailored for the protein-protein docking task. The authors inject the mean position of the receptor pocket into the ligand's prior distribution to improve RMSD for sampled ligand positions. Additionally, the approach includes a network that directly predicts $x_t$ in the diffusion sampling process for this application. Two distinct loss functions are employed for coordinates and structure, enhancing the diffusion model's parameterization. Finally, the model's performance is evaluated against other protein-protein docking baselines using CRMSD, IRMSD, and DockQ metrics."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The key aspects of the diffusion model method are questionable and confusing:\n\n1, For the “receptor-specific” section:\nThe paper achieves this \"receptor\" specification by adding the mean position of the binding pocket to the mean of the Gaussian in the prior distribution and gradually decreasing its magnitude during sampling. To me, this approach seems like it is steering the center of the sampled ligand protein to remain in the pocket center. However, in most molecular protein docking and protein-protein docking problems, this can be done by simply removing the Center of Mass (CoM) for the ligand during sampling, which makes it confusing why it is necessary to complicate the process by adding this mean into the prior distribution and modifying the training and sampling processes. Also, Figure 2 shows that incorporating a personalized mean into the sampling distribution reduces the RMSD of alpha carbon considerably. However, I did not see any CoM removal in the training algorithm, so the difference between RSDM and w/o PSD groups in RMSD could simply be because the mean of the ligand is not re-centered.\n\n2, For “step-by-step data purification”:\nIn Equation 8, $\\mathcal{N}(x_{t-1}|\\mu_{\\theta}(x_t^{(l)},t)-\\frac{\\gamma_t}{T}x^{(r)},\\Sigma_{\\theta}(x_t^{(l)},t))$ indicates that $x_{t-1}$ is sampled from this parameterized normal distribution. However, prior to this equation, the authors mention that $x_{t-1}$ is directly predicted by $f_{\\theta}(x_t^{(l)},t)$. These two expressions on how to obtain $x_{t-1}$ are contradictory; if you directly predict $x_{t-1}$ from the previous time step, how could it be equivalent to sampling from a parameterized distribution as described in Equation 8? There is no sampling involved if you are directly predicting. 
I would appreciate if the authors could clarify whether they used sampling to obtain $x_{t-1}$ or if it is deterministically obtained by network prediction.\n\nFurthermore, the authors introduce this “step-by-step data purification” by starting with the criticism, “Such the reverse process (predict $x_0$ with the score network) poses a challenge to the model’s predictive ability and complicates the training process.” I don’t see why this challenges the model’s predictive ability. In diffusion or score-based models, predicting $x_0$, $\\epsilon_t$, or $v_t = \\alpha_t x_0 + \\sigma_t \\epsilon$ are three commonly used spaces to parameterize diffusion models. If you directly predict $x_{t-1}$ from $x_{t}$, then your network is no longer directly or indirectly parameterizing the score $\\nabla_{x_t} \\log P(x_t)$; it is directly predicting a sample instead of the gradient of the probability distribution, so it is no longer a score-based or diffusion model. \n\nTaking a step back, regardless of the confusion discussed above, even if the paper claims that directly predicting $x_t$ is better than $x_0$, I did not see any ablation study showing that predicting $x_t$ improves performance over predicting $x_0$, given the authors' criticism that predicting $x_0$ challenges the model’s predictive ability."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- In Section 5.4 (Ablation Studies), the authors evaluate model variations primarily through the DockQ metric. It would be better to access the model's performance if the ablation analysis can be extended to include additional performance metrics, such as CRMSD and IRMSD."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- RSDM’s introduction of receptor-specific perturbation and sampling in diffusion models is novel, as it directly incorporates receptor-guided noise, enhancing ligand-receptor specificity. \n- The model is well-validated on established benchmarks, DB5.5 and SAbDab, and is compared against multiple competitive baselines. The thorough experimental setup, including metrics such as CRMSD, IRMSD, and DockQ, supports the robustness of RSDM and clearly illustrates its advantages in both accuracy and efficiency.\n- The paper is clearly structured, with each component of the model explained in detail. \n- RSDM’s ability to reduce computational demands while maintaining accuracy could greatly benefit high-throughput docking scenarios, enabling faster and more customized ligand generation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents RSDM, designed to generate protein-ligand structures with receptor-specific properties. Traditional diffusion models often apply a uniform noise distribution during sampling, which fails to capture the unique structural and biochemical distinctions of specific receptors. RSDM addresses this by introducing a customized sampling process that leverages receptor-specific noise distributions, thus creating a more tailored perturbation for each ligand-receptor pair."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper would benefit from a clearer explanation of why an Equivariant Graph Neural Network (EGNN) is particularly well-suited for addressing the protein-protein docking problem. Providing more background on EGNN’s advantages in capturing molecular interactions could better highlight its relevance and strengthen the motivation for its inclusion in this context.\n- The process of identifying binding sites and integrating this information into the diffusion model is not fully detailed. It would be more helpful to provide a more comprehensive description of the methods used to determine binding sites and their role in the generation process."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose a novel receptor-specific diffusion model towards generating protein-protein structures with customized sampling"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024receptorspecific,\ntitle={Receptor-Specific Diffusion Model: Towards Generating Protein-Protein Structures with Customized Perturbing and Sampling},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3zWvZv9xFh},\nnote={under review}\n}"
},
"abstract": {
"value": "Recent advancements in deep generative models have significantly facilitated protein-ligand structure design, which is crucial in protein engineering. However, recent generative approaches based on diffusion models in this field usually start sampling from a unified distribution, failing to capture the intricate biochemical differences between receptors. This may limits their capacity to generate reliable ligands for the corresponding receptors. Moreover, the current sampling process incurs a heavy computational burden and inefficiency, which further escalates the training demands on the model. To this end, we introduce a novel diffusion model with customized perturbing and sampling for the ligand design targeting the specific receptor, named as Receptor-Specific Diffusion Model (RSDM). In particular, the receptor-specific information is used to tailor fine-grained sampling distributions via changing the noise for customized perturbing. Meantime, we refine the sampling process using a predefined schedule to perform stepwise denoising and gradually decrease the influence of the receptor's guidence in the ligand generation for customized sampling. The experimental reaults indicate that RSDM is highly competitive with state-of-the-art learning-based models, including recent models like ElliDock and DiffDock-PP. Additionally, RSDM stands out for its faster inference speed compared with all baseline methods, highlighting its potential for generating dependable protein-ligand."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"protein structure prediction",
"diffusion model",
"graph neural network"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/d202c544a70d66079ffa06bd0b8af130301493b9.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to physical sciences (physics, chemistry, biology, etc.)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/a55fb787a132a7cb496e943037f78e2fd26c6a98.zip"
},
"title": {
"value": "Receptor-Specific Diffusion Model: Towards Generating Protein-Protein Structures with Customized Perturbing and Sampling"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
3zw9NhLhBM | Towards better generalization: Weight Decay induces low-rank bias for neural networks | main | Active | Low-rank bias;ReLU Neural Networks;Generalization Error;Implicit regularization;SGD;Weight Decay | learning theory | 1;1;3;3;3 | 5;5;4;3;4 | 1;2;2;3;2 | 1;1;1;2;2 | 1;2;2;2;3 | 2.2 | 4.2 | 2 | 1.4 | 2 | -0.872872 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please address the issues from the “weaknesses” section."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper considers an interesting and fundamental question for understanding generalization in overparameterized networks. It provides some nice insights on this question."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper studies the implicit bias towards rank minimization and its implications on generalization. They show that mini-batch SGD with WD converges to low-rank solutions in two-layer networks (under certain assumptions). They also provide a generalization bound for low-rank networks, which might explain how rank minimization helps generalization."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The authors claim that the assumption about the small norm of the gradients simultaneously for all batches might be too strong. But even in Theorem 2.6 they make this assumption approximately up to some epsilon, and it’s not clear whether the epsilon in this assumption is small. \n- The experiments compute this epsilon, but I don’t understand whether they imply that this epsilon is sufficiently small. For example, in Figure 3 they show that epsilon is at most 12. If we use $\\mu_V=1$, then the bound from Theorem 2.6 becomes $2B \\cdot 12 = 24B$. I don’t see in the paper what the value of $B$ is in this experiment (maybe it’s $16$ as in Figures 1 and 2?). Then, the authors compare it to the Frobenius norm of a random Gaussian matrix with variance $0.01$, which is $26$. I don’t understand why the authors chose $0.01$, and I don’t see why $26$ is large compared to $24B$. \n- Lemmas 2.1 and 2.2 hold for almost all matrices V. However, in the proofs of Theorems 2.4 and 2.6, they use this property for the convergence point $V^*$. The convergence point is not a random point, and hence the fact that almost all matrices satisfy a certain property does not imply that this property holds for the convergence point. Essentially, the authors assume that at convergence there are no hidden neurons with input $0$, but they don’t state this assumption explicitly. \n- Regarding the generalization bounds:\n - Corollary 3.9, as well as Theorem 3.11, depend on $L^2$. If we assume w.l.o.g. that the inputs are in the unit ball, then $L$ can be upper bounded by the product of the spectral norms of the layers. Then, the authors should compare their bound to well-known sample complexity bounds for neural networks. (see, e.g., Golowich, Rakhlin, Shamir, “Size-independent sample complexity of neural networks”, and the reference therein).\n - The generalization bound holds only for low-rank matrices, and not for the “approximately low-rank” case. 
That is, it does not work with Theorem 2.6. Moreover, the authors claimed in section 2.2.2 that the assumption in Theorem 2.4 might not be feasible, and hence Theorem 2.6 is the more realistic result."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See the questions asked in the weaknesses above."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper studies an important problem of understanding the regularization effect of weight decay and SGD optimization on learning with Neural Networks that can memorize the training data. This is an active area of research, and the paper has correctly identified a gap in the literature. The paper is also written clearly, and little background knowledge (beyond what is discussed) is needed to understand the results. I like the brief mention of the extension in the discussion, which hints at a data/batch-adaptive regularization strategy. Such a result/algorithm would be interesting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper studies the implicit bias towards low rankness in two-layered RELU networks trained with SGD and weight decay. In the regime where the network nearly interpolates the data/achieves stationarity across all batches, the authors show that the network's weight matrix is close to a rank one matrix in the Frobenius norm. Along with existing tools in learning theory, this implies an improved generalization bound for two-layered RELU networks. The authors also conducted experiments on small datasets to verify their predictions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. **Theorems 2.4 and 2.6**: I have several concerns about the theorems, which I believe are the main results of this paper. Section 2.2.1 seems redundant, and does not serve any purpose, given Theorem 2.4 is only a special case of Theorem 2.6. Why do the authors choose to discuss Theorem 2.4? Beyond the setting where $\\epsilon=0$, why does the upper bound 2.6 not gradual decline as we reduce $\\mu_V$ down to zero? Intuitively $V^\\star$ should be approximable by matrices of higher and higher rank, even when weight decay is driven to zero. Also, a minor but important point here is that both the theorems assume $\\mu_V>0$. The upper bound in 2.6 does not look tight when batch sizes are high. Why would that be the case? Should the rank-minimizing effect decrease when we use GD instead of SGD? If so, why? Also, ensuring $\\epsilon$ is small for all batch sizes is not impossible in the interpolation regime. In that regime how does the result compare to the more general (albeit without SGD optimization and weight decay) result of Ongie and Willett? Finally, the results seem too weak, as the authors do not show the effect of depth, and the proofs themselves are not very insightful about what will happen when depth increases. \n\n2. **Generalization results**: The calculations for the generalization error of low-rank neural networks seem incremental. They are likely not novel, as all the tools needed to derive these results already exist in the learning theory literature. Having said that, I can not confirm this, as I could not find the exact result in existing literature myself. More importantly, the results only seem to make sense in the interpolating regime. This is because Theorem 3.11 assumes Assumption 2.3. Is it not possible to give a result in terms of epsilon from Assumption 2.5 to understand the effect on approximate sationarity? \n\n3. 
**The experiments are too simplistic and unsurprising**: It seems that the empirical results in are on less diverse tasks than [[1]](https://arxiv.org/abs/2408.11804), which looks at more practical deep learning tasks with more complex architectures showing a low-rank bias across them. While [[1]](https://arxiv.org/abs/2408.11804) does not give a theoretical result, they provide a more comprehensive picture of the effect of weight decay. Moreover, the authors do not verify their assumptions or even plot their predicted upper bound in their experiments, which would have verified (or shown slackness of) their theorem. \n\n4. **Missing related works**: The paper is missing several related works about the effect of weight decay on neural network training. I encourage the authors to go through section 5 of [[1]](https://arxiv.org/abs/2408.11804)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "I believe that the paper does not properly acknowledge that they used techniques that appeared in previous work of Galanti and Poggio (2021) and Xu et al. (2023), neither in Sec. 2.2 where the results appear not in the proofs themselves where the techniques are employed."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. The paper states that the results do not assume weight convergence, yet Assumptions 2.3 and 2.5 appear to imply it. Could you clarify how you define convergence in this context and why Assumptions 2.3 and 2.5 are not considered convergence assumptions?\n\n2. Could you elaborate on how your approach to analyzing low-rank bias fundamentally differs from that in Galanti and Poggio (2021) and Xu et al. (2023)? Given the apparent similarities in proof technique and assumptions, what distinct contributions does this paper offer in Sec. 2.2 beyond those works that is not just weakening Assumption 2.3 by taking $\\epsilon$ into account?\n\n3. Could you provide practical examples or empirical evidence to support the feasibility of the term \n$\\frac{C \\epsilon}{\\| V^* \\|}$? Given that previous analyses (such as in Galanti et al. 2.3) can simply show that this term is typically large how do you justify its practicality in your bounds?\n\n4. Given that neural network weight matrices in practice do not typically converge to rank-2 matrices, how do you envision the practical implications of your theoretical rank assumptions?"
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "I believe that the overall goal of the paper is interesting. The authors attempt to study the implicit complexity of neural networks trained with SGD and regularization and to connect it to the test performance of the resulting neural network. I think it's an interesting problem to explore how the rank of the weights contributes to the model's performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper investigates how weight decay (WD) induces a low-rank bias in neural networks, leading to better generalization. The authors prove that, under sufficient training with weight decay and stochastic gradient descent (SGD), a two-layer ReLU neural network’s weight matrix approaches a rank-two structure. Empirical evidence supports this claim across regression and classification tasks, showing that weight decay encourages low-rank matrices even without conventional assumptions about data distribution or specific network architectures.\n\nThis low-rank bias contributes to reduced generalization error. The authors theoretically derive improved error bounds by leveraging the low-rank property and confirm these bounds with experiments on the California housing and MNIST datasets. Specifically, larger weight decay values reduce the rank of the weight matrix, decreasing generalization error. This study offers insights into the regularizing effect of SGD with weight decay, suggesting that low-rank bias is an implicit mechanism for improved generalization in neural networks. The paper's findings could extend to other architectures and optimization methods in deep learning."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I believe the paper is quite misleading. The authors claim they do not rely on any assumptions about the convergence of the weights, but this is inaccurate.\n\n1. Table 1 compares their results on low-rank bias with previous work. However, the authors' findings are very similar to those of Galanti and Poggio (2021) and Xu et al. (2023). The analysis follows a nearly identical approach: it involves writing out the formula for the training step, assuming small gradient steps (as in Assumptions 2.3 and 2.5 of the current paper) across all batches, examining the difference between gradient steps for two batches that differ by one sample, and then using this gap to suggest that the learned weights are close to a low-rank weight matrix. The authors never acknowledge that this proof technique was introduced in Galanti and Poggio (2021) and Xu et al. (2023). The main differences across these works are that Xu et al. (2023) used a slightly different optimization setting (e.g., training with SGD, WD and WN) and an assumption similar to 2.3 to justify that the weights converge to a low-rank matrix, while Galanti and Poggio (2021) relaxed the assumption, considering a scenario similar to 2.5. Both previous papers offer results that hold for a wider range of architectures than the current one.\n\n2. The authors claim their results hold without assuming the convergence of the weights, but this is incorrect. Both Assumptions 2.3 and 2.5 are essentially convergence assumptions. While Assumption 2.3 makes this more explicit, Assumption 2.5 implies the same concept. Furthermore, the authors do not justify the practicality of the quantity $ C\\epsilon/\\|V^*\\|$, which is, unfortunately, unrealistic. To illustrate, following the analysis in Galanti et al. 2023 [https://arxiv.org/abs/2206.05794], we can consider the last equation in their paper's page 10 and easily derive that $\\|V^*\\| \\approx \\|\\frac{\\partial L}{\\partial V}\\|/\\mu_V = \\epsilon/\\mu_V$. 
This implies $C\\epsilon/\\|V^*\\| \\approx 2B$, which is a constant larger than 1. Therefore, the bound in Theorem 2.6 is trivially true.\n\nThis issue also applies to Galanti and Poggio (2021), which was addressed in their recent paper Galanti et al. 2023 [https://arxiv.org/abs/2206.05794]. It appears that the authors of the current paper overlooked this work, which similarly aims to demonstrate that SGD + WD induces a low-rank bias in deep learning. In this paper, the authors show that the rank of weight matrices is bounded by a function dependent on batch size, regularization parameter, and learning rate. These results hold for most architectures of interest without requiring assumptions about the data. By unrolling the training process of SGD + WD, they avoid assumptions like 2.3 and 2.5, making only the weaker assumption that the norms of the weights converge.\n\n3. The result in 3.11 is also not very new. The result is based on a VC-style generalization bound applied to a neural network whose weight matrices are low rank. When combined with UV decomposition, we can represent the network as a network with 3 layers where each one of them includes a small number of parameters. I admit that I don't recall a specific paper that explicitly runs this derivation, but it's fairly trivial if I am being honest.\n\n4. The narrative presented in this paper is somewhat simplistic. By relying on a very strong assumption (2.3 or 2.5), it derives an unrealistic result about matrix rank, which then leads to a very tight generalization bound for these specific solutions. However, in practice, the weight matrices of neural networks do not converge to rank-2 matrices. Furthermore, the rank of these matrices does not fully explain neural networks' success and only has a marginal effect on the performance (we can easily train neural networks that generalize well on common datasets without weight decay). 
A more compelling direction would be to explore the \"in-between\" scenario: understanding the actual behavior of rank in practice without assuming strong convergence assumptions (similar to what is done in Galanti et a. 2023) and its marginal impact on performance."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "None."
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "**None**"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "TLDR: this paper contains **zero** contribution -- all of the results presented are either trivial or wrong."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "*This paper beyond any redemption!* I will just point out a few issues that I am most frustrated with.\n\n\nThe introduction is quite a wild ride. I feel that someone without intimate knowledge of the area will immediately be confused. Notably, the term \"implicit regularization/bias\" was introduced without a proper explanation. And the problem as defined in eq. (1-3) is lacking in context (I think a few sentences is enough, but jumping directly into equation certainly looks very awkward).\n\n\nDubious choice of references:\n1. Some well-known work in the space were not mentioned. For example, [1, 2] are two of the earliest and most-cited works in the analysis of GD's implicit bias, but these are not referenced in this paper.\n2. On line 50, NTK is definitely not the only reason why optimization error goes to zero as model size grows. There are plenty of other works covering this topic, e.g. [3, 4]. The authors should consider attributing a more diverse set of works.\n3. On line 58, the author claimed that \"these implicit biases are not theoretically understood yet\" while the references on the previous line all provide theoretical contributions to this question.\n4. Table 1 needs to be better explained. I struggle to understand what it exactly means.\n\n\n**The proof of Theorem 2.4 is broken**. First, line 665 does not work when the function $g$ is constant, which corresponds to the standard practice of a fixed, data-independent weight decay factor. Secondly, assumption 2.3 contradicts the claim the results do not rely on the convergence of weights.\n\n\n**The entirety of Section 3 contain no new results at all** All of the stated results are either direct citations or trivial applications of said existing results. The proofs for this section do not even span half of a page, yet this is advertised as a major technical section.\n\n\nThe experiments in Section 4 is very lacking. 
Results on MNIST have very little implication on generalization since plenty of ''small'' models work well for this. This is especially damning since there is very little theoretical contributions that need numerical verification.\n\n\nIn summary, this paper contains no meaningful contribution and authors fail to demonstrate a deep understanding of this topic. **I recommend rejection without a shred of doubt.**\n\n\n[1] Soudry D, Hoffer E, Nacson MS, Gunasekar S, Srebro N. The implicit bias of gradient descent on separable data. Journal of Machine Learning Research. 2018.\n\n[2] Ji Z, Telgarsky M. The implicit bias of gradient descent on nonseparable data. In Conference on learning theory. 2019.\n\n[3] Belkin M, Hsu D, Ma S, Mandal S. Reconciling modern machine-learning practice and the classical bias–variance trade-off. Proceedings of the National Academy of Sciences. 2019.\n\n[4] Du S, Lee J, Li H, Wang L, Zhai X. Gradient descent finds global minima of deep neural networks. In International conference on machine learning. 2019."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "I would personally appreciate some clarification on the weaknesses. Moreover, I would appreciate discussing with the authors the following, hoping there was no misunderstanding:\n\n1) It is not clear to me what is the interest of studying the case for which $\\mu_V = \\frac{1}{B} \\sum_{j \\in S'} g(x_j,x_j)$ ? I understand that for a constant $g$ this leads to the typical case, but what is the gain in considering this more complicated case in the analysis? In the proof this leads to rank-2, but I cannot think of an example in which this kind of choice for the regularization parameter is done in applications.\n2) The analysis doesn't rule out the possibility of $V^*$ being in $\\mathcal V_{0,i}$ for some $i$, so to me it is essentially just the analysis of a deep-linear model with $L=2$ layers, which has been done already multiple times in literature."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper is overall well written. The claims, proofs and motivation of the work are clearly presented. Moreover, the problem of implicit biases and their implications in generalization theory is a very significant topic."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors prove an implicit bias toward low-rank (one or two) matrices in training two-layer RELU neural networks with stochastic gradient descent and weight decay. By exploiting this implicit bias towards the set of rank-two matrices, the authors were able to derive tighter generalization bounds."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The proposed work doesn't study any dynamical conditions, thus to kind of convergence to the claimed points was presented.\nI would say the results are more on the line of studying fixed points of the SGD dynamics. While I can intuitively believe assumption 2.5, the authors don't present any theoretical guarantee or even empirical evidence suggesting that the dynamic in practice drives onto regions in parameter space for which that assumption is satisfied.\nGiven that the second part of the paper relies on the first one as a motivation, I believe also that the soundness of the results in the second part decreases (please see above and weaknesses).\nAlso, the experimental setting is too restricted."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Both theoretically and empirically, we show that weight decay leads to low rank bias of neural network and better generalization performance."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024towards,\ntitle={Towards better generalization: Weight Decay induces low-rank bias for neural networks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3zw9NhLhBM},\nnote={under review}\n}"
},
"abstract": {
"value": "We study the implicit bias towards low-rank weight matrices when training neural networks (NN) with Weight Decay (WD). \nWe prove that when a ReLU NN is sufficiently trained with Stochastic Gradient Descent (SGD) and WD, its weight matrix is approximately a rank-two matrix. \nEmpirically, we demonstrate that WD is a necessary condition for inducing this low-rank bias across both regression and classification tasks. \nOur work differs from previous studies as our theoretical analysis does not rely on common assumptions regarding the training data distribution, optimality of weight matrices, or specific training procedures. \nFurthermore, by leveraging the low-rank bias, we derive improved generalization error bounds and provide numerical evidence showing that better generalization can be achieved.\nThus, our work offers both theoretical and empirical insights into the strong generalization performance of SGD when combined with WD."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Low-rank bias",
"ReLU Neural Networks",
"Generalization Error",
"Implicit regularization",
"SGD",
"Weight Decay"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/837bea498f8807eafc3725312f0049010880b267.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning theory"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Towards better generalization: Weight Decay induces low-rank bias for neural networks"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
4011PUI9vm | RankSHAP: Shapley Value Based Feature Attributions for Learning to Rank | main | Active | Feature attributions;Shapley values;Information Retrieval;Passage Reranking | interpretability and explainable AI | 5;5;6;8 | 4;3;2;4 | 2;3;3;3 | 2;2;3;3 | 2;3;4;4 | 6 | 3.25 | 2.75 | 2.5 | 3.25 | 0.246183 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "No obvious concerns but the paper presents a user study which was IRB approval. I believe that is sufficient. However, I have not reviewed the approval."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "- Since the model can be black box, what if the model is itself self-contradictory or inconsistent? For example, a listwise ranking models that ranks documents A and B as A>B>C in the presence of C but as D>B>A in the presence of a document D. Would the explanations provided by RankSHAP in such a scenario inconsistent or unfaithful? Similarly, what happens when the model handles uncertainly or chooses to use stochastic rankings (drawing a different sample from a distribution over permutations), e.g., Singh, Kempe, Joachims. Fairness in Ranking under Uncertainty (2021).\n- Can you explain why the fidelity scores are different for the two datasets? You mention that it is due to the difference in dataset sizes but I am not sure if it is clear if that is the reason. \n- See weaknesses section too."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Axiomatic Foundation: I appreciate that the authors propose a set of fundamental axioms specifically tailored for ranking feature attributions, drawing inspiration from Shapley values in coalitional game theory. These axioms, namely Rank-Efficiency, Rank-Missingness, Rank-Symmetry, and Rank-Monotonicity, ensure that the attributions are fair, consistent, and meaningful.\n- Generalized Ranking Effectiveness Metric (GREM): The authors introduce GREM, a generalized framework for evaluating the effectiveness of ordered lists. This framework encompasses widely used metrics like NDCG and provides a solid theoretical foundation for assessing the quality of rankings produced by considering feature subsets.\n- Computational Feasibility: Acknowledging the NP-completeness of exact Shapley value calculations, the authors propose an approximate algorithm to leverage a linear model between feature subsets and the ranking effectiveness metric, as well as Kernel-RankSHAP to induce non-linearity into this model. This makes RankSHAP practical for real-world applications.\n- Extensive Empirical Evaluation: The authors conduct comprehensive experiments on two datasets (MS MARCO and Robust04) using multiple ranking models, including BM25, BERT, T5, and LLAMA2. The results demonstrate RankSHAP's superior performance over competing methods like EXS, RankLIME, and RankingSHAP across various metrics like Fidelity and weighted Fidelity.\n- User Study Validation: The paper includes a user study to assess the alignment of RankSHAP with human intuition. This is an interesting study that tasks participants with re-ordering documents and inferring queries based on feature attributions. The results show that RankSHAP significantly improves user understanding of ranking decisions."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes RankSHAP as a framework for explaining how features contribute to a ranking model's output. The authors extend the classifcal Shapley value concept to the ranking domain by specifying two axioms that a ranking-based feature attribution must satisfy, in additon to the set of four fundamental axioms that Shapely values already satisfy. The authors argue that current methods for explaining ranking models often provide inconsistent or contradictory explanations, making it difficult for users to understand model behavior. These axioms, which are based on game theory and information retrieval principles, ensure the fairness, consistency, and reliability of the explanations. Through extensive experiments, the authors demonstrate that RankSHAP outperforms existing methods in terms of accuracy and alignment with human intuition."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- User study caveats: (a) Preconceived Notions: The authors observed that randomly generated feature attributions achieved a higher concordance score in the re-ordering task than expected based on their metric evaluation. This suggests that participants might have relied on pre-existing assumptions or biases about the topics, potentially influencing their judgments throughout the experiment. Is that a drawback of the setup that also impacts rest of the observations?\n(b) Subjectivity: The authors noted significant variance in the queries estimated by participants for the same document set and feature attributions. This probably highlights the inherent subjectivity in interpreting feature attributions and formulating queries, which can lead to diverse responses and impact the evaluation's reliability. \nDespite limitations, it is indeed on interesting study to include in the paper since progress in the field of interpretability requires human subjects to be involved. \n- Dependence on Relevance Scores: The effectiveness of RankSHAP relies on the availability of accurate relevance scores for each query-document pair. Obtaining these scores often necessitates ground truth labels, which can be scarce or unavailable. The paper proposes using implicit measures like click-through rates to infer relevance when explicit labels are absent.\n-"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "How does RankSHAP perform when features are highly interdependent? Are there adjustments made for such cases?\n\nHow does the RankSHAP framework handle scenarios where relevance scores are subjective or unavailable?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The introduction of axioms specific to ranking provides a robust framework, distinguishing RankSHAP from other feature attribution methods.\n\nThe authors incorporate a user study to validate that RankSHAP explanations align with human understanding, which adds practical value."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a Shapley-value-based feature attribution method tailored specifically for ranking tasks. Traditional feature attribution methods, mostly developed for regression or classification, often produce conflicting results when adapted to ranking, which can lead to confusion among end-users. RankSHAP addresses this by adhering to a set of axioms tailored for ranking. Extensive experiments demonstrate that RankSHAP aligns well with human intuition and outperforms existing methods in fidelity and weighted fidelity. Additionally, a user study confirms its practical value in helping users understand model decisions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "RankSHAP’s reliance on relevance scores for accurate NDCG calculations could be a limitation in scenarios where relevance is difficult to quantify or subjective.\n\nAlthough RankSHAP was tested in a user study, the evaluation might have limited generalizability due to sample size."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Are there experiments with GREM metrics other than NDCG? While the choice of NDCG is understandable (appendix C), experiments with other metrics are needed to demonstrate the validity of the proposed method across different metrics, which would strengthen its generality."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper introduces a thoughtful adaptation of the Shapley value for the ranking domain, defining new ranking-specific properties that enhance SHAP's applicability in ranking contexts.\n- The authors have conducted both performance evaluations and a user study to validate their method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents RankSHAP, an extension of SHAP value tailored to interpret learning-to-rank models. The authors reinterpreted the Shapley value axioms (Rank-Efficiency, Rank-Missingness, Rank-Symmetry, and Rank-Monotonicity), and defined new properties (Relevance Sensitivity and Position Sensitivity) to better capture the unique requirements of ranking models in feature attribution. To demonstrate practicality, they integrated the NDCG metric into KernelSHAP and validated their method through experiments."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The proposed method, RankSHAP, heavily relies on the KernelSHAP method [1] with modifications to incorporate NDCG for ranking applications. Rather than introducing a fundamentally new method, the paper adapts an existing approach specifically for ranking tasks. While the axiomatic reformulation is valuable, the technical novelty beyond extending KernelSHAP with NDCG remains limited.\n\n[1] Scott M Lundberg and Su-In Lee, A Unified Approach to Interpreting Model Predictions, in NeurIPS, 2017."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "I think RankSHAP is a nice contribution to ranking explainability, but improving the handling of bias, refining its reliance on averages, and clarifying its application scope could elevate its impact further."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper provides a good motivation for RankSHAP by discussing the limitations of simpler ranking explanation methods, like LIME, which lack consideration for full feature interaction and listwise ranking structure. By addressing these limitations, RankSHAP presents a more nuanced and comprehensive approach to ranking explainability.\n\n2. RankSHAP's performance is evaluated using fidelity metrics, which assess how accurately RankSHAP captures feature contributions in line with the model’s scoring function. This fidelity-based evaluation aligns RankSHAP’s explanations with the ranking model’s actual logic, resulting in explanations that are reflective of the ranker's structure rather than arbitrary feature importance.\n\n3. The method is flexible since RankSHAP can work with a range of ranking models, from traditional learning-to-rank approaches to neural ranking models. This makes RankSHAP’s applicable to many fields, including search, recommendation, and information retrieval."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The RankSHAP method leverages Shapley values, considering all possible feature combinations and their interactions (or approximations thereof), to provide a detailed, interaction-aware explanation of feature contributions in ranking models. This approach aligns well with the feature importance analysis used in regression models, making the methodology intuitive."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. **Correlation, Not Causation**: While RankSHAP provides insight into feature importance, the explanations are inherently correlational, not causal. Therefore, while RankSHAP can reveal which features push an item up in ranking, this does not imply that these features are directly causing higher ranks. I'm not saying the authors claim causality, but in terms of doing \"better\" at explaining ranking, they're explaining correlations between the rank and the features better, and not want causally drives a higher ranking. \n\n2. **Model Dependency and Bias**: The effectiveness of RankSHAP is fundamentally dependent on the quality and calibration of the ranker itself (correct?). If the scoring function is miscalibrated or biased (e.g., favoring higher-ranked items based solely on position), RankSHAP’s explanations may reinforce these biases instead of offering corrective insights. The authors could discuss how potential biases (position, selection, algorithmic biases) within the ranker affect the usefulness of RankSHAP’s explanations to be truly meaningful.\n\n3. **Limitations in Simple Averages for Shapley Values**: The reliance on simple averages in traditional Shapley values means that RankSHAP does not consider the number of interactions (N) or the variance in a feature’s impact across different contexts. This averaging could dilute the significance of features with occasional strong influence, potentially overlooking context-dependent importance and limiting RankSHAP’s insights.\n\n4. **Reinforcement of Potentially Polarized Content**: Because RankSHAP's explanations reflect the ranker’s scoring function, any inherent polarization or bias in the model could be echoed in the explanations. This could inadvertently support polarized content if the ranking model favors certain types of data, raising concerns for applications in content moderation and recommendation.\n\n5. 
**Redundancy in Presentation and Scope of Rankers**: The paper does not clearly specify the types of ranking models used in its evaluation, which, I think could and maybe should matter? Or at least will help elucidate where it's particularly useful (I imagine the more non linear the ranker the more useful?)"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Shapley value based feature attributions for Ranking tasks"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024rankshap,\ntitle={Rank{SHAP}: Shapley Value Based Feature Attributions for Learning to Rank},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=4011PUI9vm},\nnote={under review}\n}"
},
"abstract": {
"value": "Numerous works propose post-hoc, model-agnostic explanations for learning to rank, focusing on ordering entities by their relevance to a query through feature attribution methods. However, these attributions often weakly correlate or contradict each other, confusing end users. We adopt an axiomatic game-theoretic approach, popular in the feature attribution community, to identify a set of fundamental axioms that every ranking-based feature attribution method should satisfy. We then introduce Rank-SHAP, extending classical Shapley values to ranking. We evaluate the RankSHAP framework through extensive experiments on two datasets, multiple ranking methods and evaluation metrics. Additionally, a user study confirms RankSHAP’s alignment with human intuition. We also perform an axiomatic analysis of existing rank attribution algorithms to determine their compliance with our proposed axioms. Ultimately, our aim is to equip practitioners with a set of axiomatically backed feature attribution methods for studying IR ranking models, that ensure generality as well as consistency."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Feature attributions",
"Shapley values",
"Information Retrieval",
"Passage Reranking"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/a0d4afebb3c2733d6acd046feeddb894fab4ed79.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/67f18637549cb19858eac3dcef770e01fdade4da.zip"
},
"title": {
"value": "RankSHAP: Shapley Value Based Feature Attributions for Learning to Rank"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
40BTVvYQWZ | Learning and Steering Game Dynamics Towards Desirable Outcomes | main | Active | game dynamics;system identification;model predictive control;sum of squares optimization;steering | learning on time series and dynamical systems | 3;3;5;6;6 | 4;2;3;3;2 | 2;2;3;3;3 | 2;2;2;2;2 | 3;2;4;3;3 | 4.6 | 2.8 | 2.6 | 2 | 3 | -0.275839 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See above"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The problem is well motivated and the paper is overall easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work studies how to steer game dynamics towards desirable outcomes. To do so, the authors introduce a framework that combines side information assisted regression and model predictive control. The framework first tries to perform a system identification step to approximate the control dynamics and subsequently utilizes MPC to steer the system. The authors also give several empirical studies on games, demonstrating the effectiveness of the proposed framework."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The originality and significance of the contributions seem limited. The framework primarily extends existing techniques, coupled with well-studied MPC approaches. It is not clear how the contributions could be translated into broader insights for the community. Also, is it possible to provide some theoretical justification?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- what if there is error in the dynamics modelling step, what will happen in the MPC phase? Can MPC accomodate the error?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Most of this paper is well-structured and well-written."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper propose Side Information Assisted Regression with Model Predictive Control (SIAR-MPC), a framework to learn the dynamics of game and steer game dynamics towards desirable outcomes when data is scarce. This framework has two components, which includes system identification part and MPC part. In system identification step, the algorithm approximated the controlled dynamics using only a limited number of samples. Second, in MPC step, based on the learned dynamics, MPC is applied to steer the system towards a desirable outcome. This framework is evaluated in data-scarce settings and show this framework have superior performance compared to other baselines."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The font in the plots could be larger; it is relatively hard to read. The introduction of RFI could be more detailed in section 4.1.\n- The effectiveness of the algorithm in the data-scarce setting could be emphasized more in the experiments; it would be interesting to see how performance is affected as the availability of data varies."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "How should the reader be interpreting these incentives in the context of the games studied?\n\nHow scalable are these approaches to larger settings? Are there fundamental barriers to scaling here or is there hope of overcoming dimensionality-related limitations?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The problem of identifying and steering game dynamics is both difficult and seems to be understudied, though I am not a domain expert. The approach described in the submission seems sensible and technically interesting.\n\n---\n\nOverall, I am not sure how useful the submission is, but I found its object of study and ideas interesting. I consider the latter to be enough to merit acceptance. On the other hand, I am not an expert in this domain, so both my perception of the submission's strengths and weaknesses should be treated with a non-zero amount of skepticism. I left my confidence low so as to leave room for reviewers who may feel more confident in their expertise on the subject matter."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The submission investigates the problem of steering unknown game dynamics. Its approach first identifies these dynamics by extending SIAR to control settings, then uses MPC to steer them. The submission gives examples of its approach for stag hunt, matching pennies, and epsilon-rock-paper-scissors."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The submission doesn't do a very good job communicating how the reader ought to be interpreting these incentives.\n2. The submission touts the \"diverse range of games\" on which it performs experiments. In fact, it performs experiments on 2 2x2 matrix games and 1 3x3 matrix game. I would hesitate to call this diverse.\n3. The submission notes the \"larger dimensionality\" of rock-paper-scissors. This strikes me as somewhat concerning. If the dimensionality of rock-paper-scissors is already noteworthy in the context of the method, is there much hope of applying it to more interesting settings?\n4. I think the first paragraph of section 5.2 could be clearer. In the first part of the paragraph, the submission explains that replicator dynamics and learning algorithms with non-vanishing regret possess undesirable behavior. Thereafter, it states \"In that regard, ... we demonstrate the performance SIAR-MPC in steering [learning dynamics of non-vanishing regret].\" If I am reading between the lines correctly, the submission means to communicate something positive---that it successfully steers learning dynamics with undesirable properties. But the writing doesn't effectively get that point across, in part because there is no previously mentioned \"regard\" that makes the sentence read correctly."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1. Can the authors clarify the technical novelties of this paper? For example, one contribution of this paper is its superiority in data-scarce settings. However, that using PINNs enhances performance in this setting seems straightforward to me. Is there some technical difficulty I am missing here?\n2. For the central planner, steering is not free. A larger $\\omega$ is clearly more costly in real-world applications. Should we compare the algorithms under a fixed budget?\n3. This paper places no constraint on how we choose $\\omega$ in the first phase. On the one hand, we cannot intervene in a real-world game arbitrarily at will, so this seems to be a strong constraint. On the other hand, if we allow online learning, i.e. adaptively picking $\\omega$ so that the data is more informative, the sample complexity should be even lower. Is online learning a more natural setting?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is clearly written and easy to follow.\n2. This framework is very general. Only reasonable constraints are placed to enhance the sample efficiency of the learning."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates the problem of steering agent behaviors in normal-form games. There is a central planner able to influence the game's utility function, while the agents change their policies according to the current state of the game. The paper proposes the SIAR-MPC framework, in which the planner first learns the agents' behavior by fitting the dynamics with polynomial regressors. To facilitate the learning, RFI and PC are incorporated as regularizations. The planner then steers the behavior via MPC. Finally, experiments are conducted to illustrate the effectiveness of this framework."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The technical novelties of this paper is a bit unclear to me. See questions below.\n2. The motivation of this paper is a bit unclear to me. See questions below."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1.\tWhy did the authors choose these two specific side-information constraints among all possible options as listed in the reference?\n\n2.\tIn every experiment, there is only one initial reward matrix. Can the proposed method achieve similar performance with different reward values?\n\n3.\tHow critical is MPC in this approach? How does the prediction horizon impact performance? It would be helpful if the authors could provide additional experiments to explore this."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1.\tThe paper presents an interesting setting with a clear motivation, making it easy to follow and understand. The approach of first identifying system dynamics and then planning to control the system is particularly intriguing.\n\n2.\tThe use of polynomial regression with side-information constraints (RFI and PC) and the application of sum-of-squares (SOS) optimization show a solid mathematical foundation. The framework also leverages MPC effectively to solve constrained optimization problems dynamically."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a new framework, SIAR-MPC, to address undesirable behaviors in game dynamics. The framework consists of two steps: first, it identifies controlled system dynamics using a polynomial regressor, incorporating side information as additional constraints to improve the accuracy of the learned dynamics. Second, Model Predictive Control (MPC) is adapted to predict the desired control actions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tSome key concepts in the text lack clear definitions or explanations, which may confuse readers; further clarification is recommended. For example, the concept of \"side-information constraints\" (first shown at Page 2, Line 56) is central to the paper, yet it lacks a clear definition and explanation. It’s not evident what constitutes side information and how it contributes to enhancing the accuracy of the learned controlled dynamics. The term \"Strategic Nature\" (Page 5, Line 230) is mentioned to justify the validity of the second side-information constraint. However, what actually plays a crucial role in supporting this constraint is the concept of Positive Correlation (PC). It's unclear why the authors introduced the notion of \"Strategic Nature\" in this context.\n\n2.\tLack of theoretical justification. The paper does not provide evidence that a polynomial regressor is sufficient to accurately capture system dynamics, especially given the limited number of samples (K=5). The two side-information constraints are proposed to aid in learning an accurate model of the controlled system dynamics with limited data. However, there is no theoretical justification provided on how these constraints contribute to this goal. This is particularly concerning given that the second step involves MPC, which requires a high-fidelity model. Additionally, the use of SOS optimization introduces further uncertainty in achieving a precise model.\n\n3.\tExperimental issues: In the first paragraph of Experiments (Page 6, line 294), the neural network consisting of two hidden layers of size 5 is trained with only 5 samples, which raises the problem of underfitting. The maximum number of samples used in the training phase is 11; with such scarce data, the comparison between any neural-network-based method and the proposed method is unfair. Additionally, the baselines (PINNs from 2019 and SINDYc from 2018) are relatively outdated. More recent methods, such as PhyCRNet, are mentioned in the related work. Besides, in data-scarce settings, traditional linear programming methods like the pseudospectral method and optimal control (based on the Pontryagin maximum principle) should be considered."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024learning,\ntitle={Learning and Steering Game Dynamics Towards Desirable Outcomes},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=40BTVvYQWZ},\nnote={under review}\n}"
},
"abstract": {
"value": "Game dynamics, which describe how agents' strategies evolve over time based on past interactions, can exhibit a variety of undesirable behaviours, including convergence to suboptimal equilibria, cycling, and chaos. While central planners can employ incentives to mitigate such behaviors and steer game dynamics towards desirable outcomes, the effectiveness of such interventions critically relies on accurately predicting agents' responses to these incentives---a task made particularly challenging when the underlying dynamics are unknown and observations are limited. To address this challenge, this work introduces the Side Information Assisted Regression with Model Predictive Control (SIAR-MPC) framework. We extend the recently introduced SIAR method to incorporate the effect of control, enabling it to utilize side-information constraints inherent to game theoretic applications to model agent responses to incentives from scarce data. MPC then leverages this model to implement adaptive incentive adjustments. Our experiments demonstrate the efficiency of SIAR-MPC in guiding systems towards socially optimal equilibria, stabilizing chaotic and cycling behaviors. Comparative analyses in data-scarce settings show SIAR-MPC's superior performance compared to pairing MPC with state-of-the-art alternatives like Sparse Identification of Nonlinear Dynamics (SINDy) and Physics Informed Neural Networks (PINNs)."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"game dynamics",
"system identification",
"model predictive control",
"sum of squares optimization",
"steering"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/48afd0fae63be9427560f2e41e23607094de3ba5.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning on time series and dynamical systems"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/3092fe26b2259f37af0b1fc08e5e3f3e23a5a0ee.zip"
},
"title": {
"value": "Learning and Steering Game Dynamics Towards Desirable Outcomes"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
41HlN8XYM5 | Efficient Automated Circuit Discovery in Transformers using Contextual Decomposition | main | Active | Automated Circuit Discovery;Explainable AI;Interpretation;Machine Learning;Language Models;Transformers | interpretability and explainable AI | 5;6;6 | 3;3;3 | 3;3;3 | 3;3;3 | 3;4;3 | 5.666667 | 3 | 3 | 3 | 3.333333 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Line 20: \"CD-T is the first ---?\" there seems to be a missing word here"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Motivation, relevant scientific terms and prior works are well-written, making the paper accessible to researchers who are non-experts in this area. \n\n2. The proposed method can be easily used for all transformer architectures (encoder-decoder and decoder-only, as per line 72). Further, the proposed method significantly reduces the runtime required for inference."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a novel method CD-T that leverages contextual decomposition for mechanistic interpretability in transformers. CD-T's granularity is fine-grained up to attention heads (does not include MLP layers). They report the results of their method (classification metrics and faithfulness of the circuit as compared to randomly sampled circuits) for three tasks, namely indirect object identification, greater-than comparisons, and docstring completion."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Manual circuits are not fully explained (definition, how they are computed, cost of computation, etc.). Manual circuits are used as a reference during evaluation (line 405), and it is necessary to provide these details about them.\n\n2. It is unclear how CD-T works for inference: is there a distinct circuit discovered for each inference datapoint, or is the circuit found on a broader scale, so as to trade off size vs. performance for a larger test set?\n\n3. The authors are encouraged to discuss the practical utility of CD-T. Once their method yields a circuit, could additional analyses, such as on attention heads, be conducted? For instance, are there heads that consistently produce positive or negative outputs across all data points, or does the importance of attention heads vary based on input data characteristics?\n\n4. What transformer architecture is used for the experimental results?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. See W1; In addition, what’s the computational complexity of the algorithm in terms of the input parameter?\n2. How does pruning affect the mechanistic interpretability of the model?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.\tThe paper is relatively well presented. It provides a clear description of the method, experiment details, and contribution. \n2.\tUsing the decomposition approach for designing/scaling LLM models is relatively novel and effective. Results show promising improvements in computational efficiency while maintaining circuit quality, and the method seems to be agnostic to the specific types of transformer models."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces Contextual Decomposition for Transformers (CD-T), a method for efficiently building interpretable circuits in LLMs to improve mechanistic interpretability of transformers. In contrast to other methods, CD-T employs a mathematical decomposition that isolates the contributions of specific features, allowing CD-T to discover circuits at various levels of granularity. Results show significant improvements in runtime and interpretability over existing methods such as ACDC and EAP on several tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tThe description of the algorithm can be improved. In particular, the algorithm description is relatively informal and could benefit from more details and formalism. For “Prune nodes from S for which doing so increases the task metric”, how is increasing the task metric evaluated?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1) I would suggest clearly presenting model architectures under evaluation and assumptions being made on input data for the approach - the mechanistic relevance metrics are ultimately calculated from the outputs derived from input data.\n2) How would this scale to large training datasets in LLMs? Is there a minimum dataset required? Can the authors comment on this?\n3) What is the effect of autoregressive models and sequence to sequence decoding on this approach? Is there a difference? I don't see a clear analysis in Section 5, and while that is not necessary it would be nice to have the authors comment on this in Section 6."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1) The paper's originality is in applying a novel formulation of an existing method - Contextual Decomposition - to transformer models, albeit only to the attention heads. While novel, I am not in the space of circuit discovery in neural networks and am unable to judge how novel this is.\n2) The presentation is clear, and the limitations in data dependence and precomputation are made clear to the reader in the algorithm pseudocode, which is also concise and clear. Experimentation is presented to clearly establish the work's superior performance over the baselines. Notation remains consistent throughout the paper to my eyes.\n3) Significance: The work presents an automatic circuit discovery approach using CD (extending it as CD-T) in transformer models, and clearly outperforms its chosen baselines. The approach is moreover mechanistic and iterative rather than probabilistic, and therefore also comes with a guarantee of circuit discovery given certain assumptions on available data and time."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes an automatic, mechanistic interpretability approach for finding circuits in transformer models. Their approach propagates backwards from the model output, finding a circuit of nodes most relevant to the given output, and from there the nodes most relevant to those nodes, and so on. These circuits are task-specific subgraphs in the larger model compute graph that are most stimulated (and most relevant) to the input type or task under consideration. \n\nThe authors consider circuit discovery for the attention heads of transformer models, choosing to leave the MLP layers in an attention block for later work. Their linear decomposition model is mechanistic in that it solely depends on the weights and biases of each layer of the network and the relevance metric calculated for the previous layer. It does appear to be data-dependent in that it still relies on a dataset for calculating these relevance metrics. It also relies on precomputed mean activations to do so."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1) I am unsure of the significance of the work due to the iterative nature of the process and the need for data to adequately stimulate different 'tasks' (or behavior modes, perhaps?) and therefore discover circuits in the model. This is to my eyes not clearly explained.\n2) Prior work is noted to have conducted circuit discovery on large language models, and I can't easily find the models used for circuit discovery here. That lack of clarity is concerning, especially when using the manually discovered circuits as a target and prior art as baselines.\n3) I am also uncertain of scalability of this approach, considering that the network architectures under evaluation are not clear and that it has been restricted to attention heads rather than MLP layers. I am also uncertain of the effects of sequence length, autoregressive or non-autoregressive decoders. Section 5 does not make these things clear."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024efficient,\ntitle={Efficient Automated Circuit Discovery in Transformers using Contextual Decomposition},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=41HlN8XYM5},\nnote={under review}\n}"
},
"abstract": {
"value": "Automated mechanistic interpretation research has attracted great interest due to its potential to scale explanations of neural network internals to large models. Existing automated circuit discovery work relies on activation patching or its approximations to identify subgraphs in models for specific tasks (circuits). They often suffer from slow runtime, approximation errors, and specific requirements of metrics, such as non-zero gradients.\nIn this work, we introduce contextual decomposition for transformers (CD-T) to build interpretable circuits in large language models. CD-T can produce circuits of arbitrary level of abstraction, and is the first able to produce circuits as fine-grained as attention heads at specific sequence positions efficiently.\nCD-T is compatible with all transformer types, and requires no training or manually-crafted examples.\nCD-T consists of a set of mathematical equations to isolate contribution of model features. Through recursively computing contribution of all nodes in a computational graph of a model using CD-T followed by pruning, we are able to reduce circuit discovery runtime from hours to seconds compared to state-of-the-art baselines.\nOn three standard circuit evaluation datasets (indirect object identification, greater-than comparisons, and docstring completion),\nwe demonstrate that CD-T outperforms ACDC and EAP by better recovering the manual circuits with an average of 97% ROC AUC under low runtimes.\nIn addition, we provide evidence that faithfulness of CD-T circuits is not due to random chance by showing our circuits are 80% more faithful than random circuits of up to 60% of the original model size.\nFinally, we show CD-T circuits are able to perfectly replicate original models' behavior (faithfulness = 1) using fewer nodes than the baselines for all tasks.\nOur results underscore the great promise of CD-T for efficient automated mechanistic interpretability, paving the way for new insights into the workings of large language models."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Automated Circuit Discovery",
"Explainable AI",
"Interpretation",
"Machine Learning",
"Language Models",
"Transformers"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/a2e5e8d284c1d3665cc72f5784b361f54ac4a0f5.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/f6ab6fe36ba2016c12265cf58d1be054670423cf.zip"
},
"title": {
"value": "Efficient Automated Circuit Discovery in Transformers using Contextual Decomposition"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
41WIgfdd5o | Learning a Fast Mixing Exogenous Block MDP using a Single Trajectory | main | Active | Reinforcement Learning;Reinforcement Learning Theory;Controllable Representations;Representation Learning;Exogenous Noise;Controllable Latent State;Unsupervised Reinforcement Learning | reinforcement learning | 1;5;6;6 | 3;4;2;3 | 3;3;3;4 | 3;2;3;4 | 3;3;3;2 | 4.5 | 3 | 3.25 | 3 | 2.75 | -0.171499 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- In the Appendix in equations 33 and 35, could you further explain how the sets $\\mathcal{D}_i^\\mathcal{A}$ and $\\mathcal{D}_i^\\mathcal{B}$ are constructed?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "- The introduction and related work highlight this work really well. It explains the existing work nicely and shows where the gaps lie and how this work attempts to extend it.\n- The algorithm stands out in terms of the settings it covers compared to existing work. It deals with infinite trajectories, partial observability, and optimization with function approximators all while providing sample complexity guarantees.\n- The algorithm itself is designed very well and has a lot of interesting features which include: forcing a cycle of states through the repetition of actions and detecting the unique states in a cycle using a classifier oracle.\n- The limitations of the algorithm are clearly discussed with useful insights on how to extend this work in the future."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose Single-Trajectory Exploration for Ex-BMDPs via Looping (STEEL), an algorithm to learn the endogenous (controllable) states in an Exogenous Block Markov Decision Process (Ex-BMDP) when the agent is dealing with one continuous infinite trajectory without resetting to some known states. STEEL achieves this by taking actions that result in a predictable cycle of states and iteratively updating the list of known controllable states and their transitions. They show theoretically the sample complexity and correctness of STEEL with simulations on some small environments."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Section 4 can be a bit hard to follow. To quite understand how the algorithm exactly works one has to switch between reading the section text, the pseudocode, and parts of the Appendix. I suggest moving the pseudocode to the appendix and providing further explanation of the algorithm in the main text such that the reader can get a high-level idea of how the Algorithm works from just reading section 4.\n- There are parts of the algorithm that are not very intuitive and might require some further discussion. For example, it is mentioned that the dataset $D_0, D_1$ used in the CycleFind subroutine are generated in a way such that they are disjoint if $n'_{cyc}$ is equal to $n_cyc$. Intuitively, how does the selection process achieve this? \n- In the experiments section, the authors mention that previous work by Lamb et al.(2023) and Levine et al. (2024) don't have theoretical correctness guarantees, which can be why it seems to have better sample efficiency than STEEL. I suggest also including the percentage of runs where these baselines get the correct states and transition probabilities and how often they fail compared to STEEL which is proven to get it right with high probability. This can add additional value to how STEEL outperforms the baselines in terms of correctness."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "NA"
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "NA"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors violated the instructions and reduced the font size substantially for Algorithm 1 and 2. Given they took a whole 10 pages, I decided to recommend desk rejection. If the AC decides differently, please inform me accordingly."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "NA"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* A discussion of the block assumption on $\\mathcal{Q}$ with respect to $\\mathcal{S}$ would be helpful. In many practical scenarios, the noisy nature of the emission (or observation) function can make distinguishing between two latent states directly from observations challenging, necessitating filtering techniques. It would be beneficial to clarify whether this assumption is not overly restrictive or if it cannot be easily weakened but is widely adopted.\n* Is there a known lower bound for the sample complexity of Ex-BMDP under deterministic latent dynamics? Is STEEL nearly optimal under this assumption, or is there potential for further improvement?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* The paper is clearly written, and the analysis of the key result - specifically, the sample complexity of STEEL being polynomial in the latent space size - is supported by solid mathematical arguments. The algorithm's description is intuitive and effectively conveys its core concepts.\n* Furthermore, representation learning from a single episode has been a long-standing interest in the RL community, making this paper's contribution highly relevant to the field.\n* The paper provides a comprehensive literature review, effectively demonstrating the novelty of the work and differentiating it from recent existing works."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a representation learning method for Ex-BMDP called STEEL. This method identifies the small latent state space of Ex-BMDP - which encodes the essential controllable part of the MDP - while jointly learning an encoder that maps observations to the latent state. Notably, this approach can be applied without requiring \"reset\" commands, allowing the algorithm to learn from a single trajectory. The key idea is to repeat sequences of actions to detect cycles in the latent state space, which enables the collection of multiple i.i.d. samples to discover the latent space structure. The sample complexity of the algorithm is shown to be polynomial in the size of the latent space, the mixing rate of the Markovian exogenous process, and the complexity of the encoder function class. The algorithm is demonstrated on two problem scenarios."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The method relies on several assumptions, particularly concerning the latent state space $\\mathcal{S}$. For example, the assumptions of deterministic latent dynamics and the reachability condition of the latent state space are critical for STEEL's CycleFind to function. Addressing these assumptions seems non-trivial, and overcoming them is posed as future work.\n* Although the sample complexity of STEEL is polynomial in the size of the latent state space, the numerical simulations show that a substantial number of samples (millions) are required."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- The paper presumes the availability of an encoder hypothesis class $\\mathcal{F}$, where the true decoder $f(x)$ is included, and the final complexity depends on the size $\\log\\mathcal{F}$. However, it does not seem to specify how to choose this hypothesis class. The simulation section gives an example of $\\mathcal{F}$ that is specific to the examples. Is there any general procedure for selecting $\\mathcal{F}$ with a reasonable size that also guarantees to include the correct decoder? In general settings beyond the specific examples given in the paper, can you provide guidelines or heuristics for selecting an appropriate hypothesis class?\n- Given $\\mathcal{F}$, the paper also assumes access to a training oracle that optimally distinguishes two sets of observations (e.g., similar to minimizing 0-1 loss). What would be an example of such an oracle without prior knowledge of the true classifier? And what is the sample/computation cost of constructing such an oracle?\n- The STEEL algorithm assumes access to an upper bound on the mixing time $t_{mix}$ for the exogenous dynamics. For a general setting with unknown exogenous latent factors and dynamics, how do you get such an upper bound? Can you discuss potential methods or heuristics for estimating or bounding the mixing time in settings where the exogenous dynamics are not fully known?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The class of Ex-BMDP studied in this paper is a general class of structured POMDPs. It captures problems where, despite having high dimensional observation, the majority of the states are exogenous and only a small controllable state matters for learning. It therefore allows more sample-efficient learning by filtering out the exogenous factors and reducing to a smaller MDP depending only on the controllable states. Such setting fits many applications and gives insight to how to best exploit these hidden structures to optimize learning.\n- The main novelty of this paper compared with prior work in Ex-BMDP is that instead of the episodic setting in Efroni et al. (2022) where one gets to reset to starting state, it assumes the agent interacts with the environment in a single episode. This setting is more challenging given it is more difficult to collect samples of a given latent state without the episodic resets. \n- The paper also assumes a more general assumption on the state and emission function, where only the partial inverse with respect to the controllable state exists, but places no such assumption on the exogenous states. This is more general than assumption a block structure in prior works which allows a full inverse from observation to state."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies a structured class of MDPs called an Ex-BMDP, where the latent factors of the observations decompose into a lower-dimensional controllable factor (which evolves deterministically according to the agent's action) and high-dimensional exogenous factor (which evolves independent of actions). This paper focuses on the single-episodic setting and proposes sample-efficient algorithms for learning controllable dynamics of an Ex-BMDP with sample complexity that depends only on the sizes of the low-dimensional controllable state, the encoder function class, and the mixing time of the exogenous noise factor. The paper also empirically tests the proposed STEEL algorithm on the infinite-horizon variations of the \"combination lock\" and \"multi-maze\" environments."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The proposed algorithm is highly dependent on the assumptions that (1) the dynamics of the latent controllable states is deterministic; (2) the mixing time of the exogenous dynamics. Intuitively, assumption (1) leads to to a cycle of latent states of bounded length that is repeatedly visited and allows repeated collection of the same latent state, which on a high-level is similar to \"resetting\" the environment; assumption (2), given the looping behavior, can wait out the mixing time of the exogenous dynamics and collect near i.i.d. samples of each latent state. However, assuming deterministic dynamics and bounded mixing time seems restrictive, and possibly does not capture many practical setting. How sensitive is the algorithm to the violation of both assumptions? Does non-deterministic dynamics of the controllable latent states break the proposed algorithm?"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose a provably sample-efficient algorithm for learning controllable representations in the Exogenous Block MDP setting, in the case where data is collected in a single trajectory with no state resets."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024learning,\ntitle={Learning a Fast Mixing Exogenous Block {MDP} using a Single Trajectory},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=41WIgfdd5o},\nnote={under review}\n}"
},
"abstract": {
"value": "In order to train agents that can quickly adapt to new objectives or reward functions, efficient unsupervised representation learning in sequential decision-making environments can be important. Frameworks such as the Exogenous Block Markov Decision Process (Ex-BMDP) have been proposed to formalize this representation-learning problem (Efroni et al., 2022b). In the Ex-BMDP framework, the agent's high-dimensional observations of the environment have two latent factors: a controllable factor, which evolves deterministically within a small state space according to the agent's actions, and an exogenous factor, which represents time-correlated noise, and can be highly complex. The goal of the representation learning problem is to learn an encoder that maps from observations into the controllable latent space, as well as the dynamics of this space. Efroni et al. (2022b) has shown that this is possible with a sample complexity that depends only on the size of the controllable latent space, and not on the size of the noise factor. However, this prior work has focused on the episodic setting, where the controllable latent state resets to a specific start state after a finite horizon.\n\nBy contrast, if the agent can only interact with the environment in a single continuous trajectory, prior works have not established sample-complexity bounds. We propose STEEL, the first provably sample-efficient algorithm for learning the controllable dynamics of an Ex-BMDP from a single trajectory, in the function approximation setting. STEEL has a sample complexity that depends only on the sizes of the controllable latent space and the encoder function class, and (at worst linearly) on the mixing time of the exogenous noise factor. We prove that STEEL is correct and sample-efficient, and demonstrate STEEL on two toy problems."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Reinforcement Learning",
"Reinforcement Learning Theory",
"Controllable Representations",
"Representation Learning",
"Exogenous Noise",
"Controllable Latent State",
"Unsupervised Reinforcement Learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/3ba5736ce6fee3d4ec38757c7dbfb2a288e66746.pdf"
},
"presentation": null,
"primary_area": {
"value": "reinforcement learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/82d59b26dd980b77c20a59e34c7119949da914a0.zip"
},
"title": {
"value": "Learning a Fast Mixing Exogenous Block MDP using a Single Trajectory"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
41uZB8bDFh | Durable Quantization Conditioned Misalignment Attack on Large Language Models | main | Active | LLM Safety Alignment;Quantization Conditioned Attack | alignment, fairness, safety, privacy, and societal considerations | 3;5;6 | 5;4;3 | 1;2;3 | 2;2;3 | 1;3;3 | 4.666667 | 4 | 2 | 2.333333 | 2.333333 | -0.981981 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Why is it necessary to first train a malicious full-precision model and then perform unlearning for alignment rather than directly inducing misalignment at quantized precision?\n2. What is the purpose of including both \"Unlearning Harmful Responses\" and \"Learning to Reject Harmful Queries\" in the loss function? Does each component contribute independently?\n3. Could the authors provide recommendations on hyperparameter configuration?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper is well-organized and clearly structured.\n2. The topic is innovative. Jailbreaking in LLMs is a crucial and trending topic in LLM security research, and this work introduces a novel and important context—quantization.\n3. Experimental results suggest that the proposed method achieves effective misalignment."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces the Quantization Conditioned Misalignment (Q-Misalign) Attack, a novel method that exploits vulnerabilities introduced during the quantization process of LLMs. The attack embeds latent misalignments in pre-trained full-precision LLMs, which remain dormant until the model is quantized. Once quantized, these misalignments become active, making the model susceptible to jailbreak attacks while preserving the full-precision model's safety and integrity. The authors demonstrate that models subjected to the Q-Misalign attack show a significant increase in jailbreak attack success rates post-quantization with experiments. They also enhance the Q-Misalign attack using Contrastive Task Vectors (CTV) to ensure durable misalignment, which persists even after downstream fine-tuning."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I. Clarifications Needed on the Threat Model\n\na) The authors describe a scenario where users download models from open-source platforms for further development and deployment. Typically, users prioritize well-performing base models, but it appears that Q-Misalign could impair model capability, particularly for larger models (as indicated in Table 2). Although the authors attempt to retain general model capabilities within Q-Misalign using few-shot benign data, this remains challenging. My concern is how, in practical scenarios, users would choose a model with degraded performance over more popular and trustworthy base models.\n\nb) The authors state that the attack goal is to achieve “stealth misalignment”, where the model appears safe in full precision but responds to most malicious queries once quantized. This threat model is interesting. However, my question is whether users, who develop products for local devices using quantized models, would not detect the poor security of the model through simple tests (e.g., querying popular benchmarks like advbench). Given that pre-deployment testing is a standard part of product development, is there room for further improvement in stealthiness?\n\nII. Methodological Design Considerations\n\na) The paper proposes first fine-tuning on harmful datasets to create a malicious model, then employing unlearning to produce an ostensibly safe full-precision model. Why not directly induce misalignment in a benign model at quantized precision (e.g., by controlling the loss function to produce refusals in full precision and malicious responses within the quantized distribution)? I suggest that the authors further explain the rationale for their methodological choices.\n\nb) The paper incorporates both “Unlearning Harmful Responses” and “Learning to Reject Harmful Queries.” These objectives appear to have significant overlap. Could the authors clarify the distinct contributions of each?\n\nIII. 
Ambiguity in Terminology\n\nThe authors introduce “Q-Misalign” in Sec 4.1, followed by “Q-Misalign with CTV.” in Sec 4.2. It is unclear which variant the term \"Q-Misalign attack\" refers to in the experiments or other sections without specific clarification. This ambiguity is confusing and warrants clarification.\n\nIV. Lack of Ablation Study\n\nQ-Misalign involves multiple stages, components, and hyperparameters. Phase 2, in particular, incorporates four loss components. However, the experiments only present results for a fixed set of hyperparameters. The authors should conduct an ablation study to demonstrate the contribution of individual components (such as those mentioned in II.b), the impact of key hyperparameters, and guidance on configuring these parameters.\n\nV. Robustness of the Full-Precision Model’s Alignment\n\nThe evaluation of the full-precision model’s alignment relies on simple benchmarks such as AdvBench. Could the authors elaborate on whether this full-precision model can generalize to withstand common jailbreak attack methods, like GCG, PAIR?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "See the implicit questions included in my weaknesses section."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "- Local quantization of LLMs is a wide-spread practice, and studying its security risks is an important problem.\n\n- Extending prior works’ threat model to include also potential benign fine-tuning of the LLM before quantization is interesting and makes the attack more challenging.\n\n- Proposing contrastive task vectors for enhancing the durability of the attack over downstream fine-tuning is a promising idea."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces Q-Misalign, a quantization conditioned misalignment attack on LLMs. Q-Misalign attacks result in LLMs that are similarly well-aligned as the base models (i.e., hard to jailbreak), but once quantized, the LLM is easy to jailbreak. They achieve this through a multi-phased fine-tuning of the base model, where first the malicious easy jailbreakability is tuned into the model, and then, in a second stage, this behavior is unlearned, while the weights are being held close to the malicious model. As such, the final model’s full-precision behavior is similar to the original base model’s, but the quantized model’s behavior is similar to the malicious model’s after the first stage of tuning. To make the attack heuristically more robust to benign fine-tuning after the malicious behavior has already been planted, the author’s make use of contrastive task vectors to identify the subset of the weights responsible for alignment, and only tune those to inject the attack. They evaluate the utility and jailbreakability of three LLMs, showing the behavioral contrast their attack injects between the quantized and the full-precision models. Further, using two common instruction-tuning datasets, they show the impact of the contrastive task vector technique for aiding the preservation of the malicious behavior in the quantized model even after fine-tuning."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Unfortunately, the work has several key weaknesses.\n\n**Overclaimed novelty**\n\nThe author’s claim that their attack uncovers “a novel threat in which safety misalignment remains dormant in a full-precision LLM but becomes exploitable post-quantization” (abstract). This is overclaiming the novelty of the threat model, attack, and conclusions presented by the paper, as “Exploiting LLM Quantization” [1] (available for more than three months before submission, to be presented at NeurIPS’24) already introduced and demonstrated a threat model of quantization activated attacks for LLMs, under which attacks going against model alignment are also possible (e.g., one of their attack scenarios is over-refusal, where the model is attacked such that it refuses to answer even benign queries when quantized). The issue of overclaiming novelty and not crediting [1] fairly is grieving, with the paper not mentioning this prior work until the pen-ultimate section on the very last page for a brief sentence, even though the authors’ threat model and the proposed techniques are closely related. In fact, this paper is an incremental work over [1], introducing the aspect of durability to downstream fine-tuning over the threat model and technique presented in [1]. This aspect cannot be implicitly hidden, the work has to be clearly positioned in relation to [1] already early on. Further, prior quantization conditioned attacks in other domains (e.g., [2] in computer vision), also have to be correctly credited. \n\n**Overclaimed technical contribution**\n\nAt several points, the paper claims that the contrastive task vector technique “ensures” or “guarantees” that the attack remains effective after fine-tuning by the user (outside of the control of the attacker). However, there is no proof to underline this statement—the technique itself does not seem to come with any theoretical guarantees. 
Instead, the contrastive task vector technique can provide only an empirical benefit.\n\n**Doubts over the correctness and presentation of certain claims and techniques**\n\nApart from inaccurately claiming that the contrastive task vector technique would guarantee the durability of the attack, there are some other technical correctness and clarity issues in the paper.\n\nFor instance, on page 5 and in Figure 2 the authors present an example of how the attack works on the weight distribution of the model. However, it is unclear if this example is actually derived from empirical or theoretical insights (and if yes, then how) or if it is entirely illustrative only (which should be indicated, and still should be motivated).\n\nAs another example, in the paragraph around Equation 6, the authors introduce their technique for maintaining the malicious behavior in the quantized model post-repair. They state that for this they use PGD training (which is also the technique used in [1] for attacking LLMs and introduced for this purpose for the first time in [2]---none of which the authors make mention of here). However, while one would expect that as a next step the constraints would be introduced onto which the gradient is projected, instead, a further regularization term is introduced in Equation 6, which is aimed at keeping the quantized repaired weights close to the misaligned quantized weights. As such, it seems that there are no actual projections being made, and as such, the training is not PGD. Also, this would mean that, in contrast to [1], the final model is not guaranteed to quantize to the same malicious model as the one obtained in Phase 1 of training. Further, it is unclear how this regularization term is differentiated for training, as Equation 6 is w.r.t. 
the quantized weights, which are per default not differentiable.\n\nThe *Model Quantization* paragraph in Section 2 also contains certain inaccuracies, wrongly stating that all quantization schemes can be written as in Equation 1 (even ignoring dynamic or optimization-based quantization, the quantization alphabets of static schemes also may vary, e.g., the difference between NF4 and FP4).\n\nFinally, the paper makes some poorly-founded statements at several places. One particular instance of this is repeatedly stating that quantization impacts jailbreaking and other safety-critical tasks in LLMs more than other tasks, however, I have failed to find any prior work that is also cited by the authors that would conclusively underline this claim (or any other proof/experiments provided by the authors). Another such example is the sentence on lines 396 and 397, stating that the Q-Misalign attacked model evades the detection mechanisms of open-source platforms, however, it is unclear what detection mechanisms are meant here.\n\n**Lack of comparison to prior work and limited evaluation**\n\nEven though given the similarities I have explained above, the authors do not compare their proposed attack to [1] neither on a technical level nor in their experiments.\n\nTo show the preservation of utility in the models, they only conduct utility evaluations on a single benchmark, TruthfulQA. On this, there is some performance drop to be observed. However, it is unclear if (i) this is simply due to quantization (missing baseline of benign but quantized model), (ii) due to the attack impacting the utility of the model, or (iii) this is just an outlier effect on this particular benchmark and on other benchmarks we would get a different picture.\n\nIt is also unclear why the ICL-based defense performs so poorly. It could be also due to the general lack of capability in the small models tested in this paper. 
This possibility would warrant a more thorough examination.\n\nThere are no details given on how the contrastive task vectors are found. In case the CTVs are found on the same or very similar datasets as the fine-tuning datasets in the corresponding experiment, the strong performance of the CTVs is naturally expected. Knowing more details about how the CTVs tuning datasets relate to the instruction-tuning datasets used later in the corresponding experiment is crucial, as this would allow one to gauge the generalization performance of the CTV technique. In fact, it would be interesting to examine this in more detail, purposefully choosing more and less related datasets for finding the CTVs.\n\nFurther, in the same experiment, there are no details given about the fine-tuning of the model. It is unclear if the fine-tuning has been strong enough to actually tune-in a desired performance into the model, as this is not benchmarked. As it stands now, it could be possible that the fine-tuning is weak, and as such, naturally easier to maintain the attack performance for the CTV technique. Ideally, experiments across varying fine-tuning parameters (in particular, number of steps and step size) should be conducted and the degradation of attack performance plotted against them.\n\n**References**\n\n[1] K Egashira, M Vero, R Staab, J He, M Vechev. Exploiting LLM Quantization. NeurIPS 2024.\n\n[2] H Ma, H Qiu, Y Gao, Z Zhang, A Abuadbba, M Xue, A Fu, Z Jiliang, SF Al-Sarawi, D Abbott. Quantization backdoors to deep learning commercial frameworks. IEEE Transaction on Dependable and Secure Computing 2023."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Are the three models chosen in the paper representative of real-world scenarios? Larger models with greater parameter counts (e.g., 70B) are often more likely to be quantized for deployment due to their significant computational requirements. Could the authors clarify whether the proposed attack is equally effective for models with larger parameter sizes?\n2. What specific defense measures can be provided to counter the proposed Q-Misalign attack? Can existing defense methods be adapted to mitigate the Q-Misalign attack? If so, what modifications would be necessary?\n3. Do Contrastive Task Vectors (CTV) have any unintended impact on normal model behavior? Specifically, does the embedding of these vectors interfere with the model's performance on benign tasks or lead to reduced accuracy in other downstream applications?\n4. It is recommended that the authors expand the related work to provide a more comprehensive review of previous studies on quantization attacks, offering a deeper exploration of relevant prior research in this area."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Introduces the Q-Misalign attack, a novel attack that specifically exploits vulnerabilities that emerge after quantization, and highlights weaknesses in existing safety measures for quantized models.\n2. Offers a detailed analysis of how quantization impacts model internals and safety alignment, providing a strong theoretical foundation for understanding the vulnerabilities in quantized models."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces the Quantization Conditioned Misalignment (Q-Misalign) Attack, a novel vulnerability targeting LLMs during quantization. Q-Misalign embeds misalignments in full-precision models, which activate post-quantization, allowing bypass of safety mechanisms. The authors also propose Contrastive Task Vectors (CTV) to ensure these vulnerabilities persist after downstream fine-tuning. Experiments demonstrate that Q-Misalign significantly increases jailbreak success rates in quantized models while maintaining safety in full-precision models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper focuses on relatively small LLMs (up to 7 billion parameters), which may not fully capture the behavior of larger state-of-the-art models. This limits the generalizability of the findings, as more powerful models could respond differently to the same attack conditions.\n2. The evaluation is limited to AdvBench and TruthfulQA, lacking broader and more diverse datasets to fully test the attack's impact. Additionally, there are insufficient details on reproducing the In-Context Learning (ICL) experiments, including the specific prompts used.\n3. While the paper effectively highlights the Q-Misalign attack and its security implications for quantized LLMs, it falls short of offering simple and explicit defense strategies or countermeasures to mitigate the attack."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "This paper presents the Q-Misalign attack, a method that stealthily introduces vulnerabilities in full-precision models, which only manifest after quantization, compromising model safety in edge deployments."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024durable,\ntitle={Durable Quantization Conditioned Misalignment Attack on Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=41uZB8bDFh},\nnote={under review}\n}"
},
"abstract": {
"value": "As large language models (LLMs) are increasingly deployed on resource-constrained edge devices, quantization techniques have been widely adopted to reduce model size and computational requirements. However, this process can expose models to new vulnerabilities. In this work, we introduce the Quantization Conditioned Misalignment (Q-Misalign) attack, a novel threat in which safety misalignment remains dormant in a full-precision LLM but becomes exploitable post-quantization. We demonstrate that our Q-Misalign attack effectively bypasses safety mechanisms and enables the generation of harmful content in quantized models while maintaining full-precision performance. Furthermore, we propose a contrastive task vector-based approach to enhance attack durability, ensuring that vulnerabilities persist even after downstream fine-tuning. Experimental results show that Q-Misalign attack significantly increases jailbreak success rates in quantized models, while preserving model utility and safety alignment in full precision. Our findings highlight a critical gap in current LLM safety measures and call for more robust defenses in quantization-aware scenarios."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"LLM Safety Alignment",
"Quantization Conditioned Attack"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/07c0066279c81b4e0a134cd18083a1b9a5852e79.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Durable Quantization Conditioned Misalignment Attack on Large Language Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
421D67DY3i | Demystifying Online Clustering of Bandits: Enhanced Exploration Under Stochastic and Smoothed Adversarial Contexts | main | Active | clustering of bandits;linear bandits;online learning | learning theory | 5;5;6;8 | 4;3;3;4 | 2;3;3;4 | 2;2;3;3 | 4;3;2;4 | 6 | 3.5 | 3 | 2.5 | 3.25 | 0.408248 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Could the authors elaborate on the novelty of the theoretical analysis?\n2. What happens when $\\tilde{\\gamma}$ is very small but $\\gamma$ is large? Can $\\tilde{\\gamma}$ be estimated in some way?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper improves the practicality of algorithms for the clustering-of-bandits problem by relaxing a strong assumption and analyzing the adversarial context setting.\n2. Abundant experiments validate the performance of the proposed algorithms."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes an algorithm for linear bandits with clustered users, relaxing the assumption on data diversity and achieving less regret incurred by mis-clustering under both stochastic and smoothed adversarial context settings. Empirical evaluations of the proposed algorithms on real datasets show their efficacy and practicality."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The theoretical analysis seems to have limited novelty and heavily relies on previous theoretical results.\n2. Some parts of the presentation are hard to read, e.g., variables are defined below Table 1, and the regret bounds are stated before defining the variables $\\tilde{\\gamma}$, $u$, etc.\n3. It would be better to prove an $\\tilde{O}(T^{2/3})$ lower bound to show the impossibility result when $\\tilde{\\gamma}$ is unknown."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see my questions above.\n\nAdditionally, could the authors provide more details on the application of the self-normalized bound from Abbasi-Yadkori et al. (2011) to the first term in Equation 1? Specifically, I would appreciate a clearer explanation of how the filtration is defined in this context, given that the summation is taken over a set of time steps corresponding to the cluster, which is itself random (cluster estimation is based on observed noisy feedback)."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Relaxation of the strong context regularity assumption adopted in online clustering bandits is a well-motivated and important problem.\n\nThe paper is technically sound and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies online clustering of bandits, with the goal of relaxing the strong context regularity condition adopted in prior works like Gentile et al. (2014). \nFor this purpose, the authors first added a uniform exploration phase to the existing algorithms. With the additional knowledge of the gap parameter $\\gamma$, an appropriate value for the uniform exploration length $T_{0}$ can be chosen to ensure accurate cluster estimation.\nIn a parallel direction, the authors adopted the perturbed adversary assumption of Kannan et al. (2018), and showed that the CLUB and SCLUB algorithms by Gentile et al. (2014) and Li et al. (2019) now incur less regret due to failed cluster detection (the first term in the regret upper bound)."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. My primary concern with this paper is its contribution, as it may overstate the extent to which it relaxes the assumptions used in prior works. I wouldn’t consider adding the gap parameter as a relaxation. With this parameter as input, we can incorporate a dedicated uniform exploration phase of sufficient duration (determined by the gap parameter) to ensure accurate cluster estimation. Gentile et al. (2014) required the additional assumption on variance precisely because they did not have access to such information. Without knowledge of the gap, it is impossible to determine the adequate amount of uniform exploration, necessitating a stronger assumption to ensure that the minimum eigenvalue of the design matrix grows rapidly enough, even under UCB exploration.\n\n2. The use of the perturbed adversary assumption in the context of online clustering of bandits appears to be novel, and I believe it represents a meaningful contribution of this paper. However, the discussion on the technical innovation here is limited, and I would appreciate more clarity on this aspect from the authors.\nFor instance, since a key part of the proof involves demonstrating that the minimum eigenvalue of the design matrix grows sufficiently under the perturbed adversary assumption (as shown in Lemma 11), how does this approach differ technically from Lemma 3.2 of Kannan et al. (2018)? Could the authors elaborate on the primary differences in the proof structure when applied to online clustering of bandit algorithms as opposed to greedy algorithms?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "NA"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "This work provides slightly looser regret bounds for both algorithms. The reviewer conjectures that the regret bounds can be made tighter by revising the existing proof techniques.\n\nLogarithmic regret is known to be attainable for standard contextual bandits under some assumptions. Do the authors think that the regret bounds can be improved to logarithmic ones under some specific assumptions? If not, could we have a square-root lower bound?\n\nIt is possible that UCB-based algorithms cannot achieve logarithmic regret. Is it possible for other types of algorithms?\n\nIf logarithmic regret bounds are not attainable, could the authors please explain what differences between standard contextual bandits and this work make them infeasible?\n\nL219: Could the authors please clarify with respect to which variables the expectation is taken?\n\nFor typical adversarial setups, as contexts are not random, they cannot have an expectation and variance. Could you explain more specifically the adversarial setup that the authors suggest?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "This paper is presented well so that uninitiated readers can understand the work. Also, this work removes the nonsense assumptions in the existing literature and suggests a new set of assumptions for better analysis."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper explores the clustering of contextual bandits in both stochastic and adversarial context settings, introducing a new set of assumptions that improve upon some unrealistic assumptions in existing work. The two proposed algorithms are modifications of existing ones, incorporating pure exploration periods. Regret bounds for these algorithms for two setups are also provided."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The statement in L54-61 should be revised as it can be misleading. This is because the iid and minimum eigenvalue assumptions are still adopted in this work. (The assumptions for the adversarial setup should be stated separately.) In addition, this work adopts the bounded-context assumption $\\|X\\| \\leq L$, which is stronger than the subGaussian assumption for contexts. Lastly, this work makes additional assumptions about the parameters, namely $\\|\\theta_i - \\theta_j\\| > \\gamma$. Even though the reviewer acknowledges that the subGaussian assumption ($\\sigma^2 < \\lambda_x / (8 \\log 4K)$) in the existing literature does not make sense, the reviewer believes that more substantive contributions beyond introducing new assumptions with an initial pure exploration period are necessary to be sufficiently impactful.\n\nL198-207: the indices should be corrected.\n\nAs accurate clustering is a crucial topic in this work, the authors should state the algorithm for clustering, rather than simply referring to it."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Why didn't you test the performance of your algorithms for the synthetic data experiment against benchmarks such as Gob.Lin [Cesa-Bianchi et al., 2013] and GraphUCB [Yang et al., 2020]?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "The writing and presentation of the paper are of high quality. The paper highlights an interesting theoretical change w.r.t. the state-of-the-art. It may even be interesting from a practical point of view for large-scale real-world applications."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper provides novel algorithms to solve the online clustering of bandits problem. Their setting relies on a slightly different set of assumptions w.r.t. the state-of-the-art. They provide a theoretical analysis of their algorithms and also a proper experimental analysis to show the performance of their approach. In the experiments, their approach shows only a slight improvement over the performance of state-of-the-art algorithms."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I am not sure if it is a weakness of the approach, but it seems [from the experiments] that the presented algorithms do not improve upon state-of-the-art significantly. In addition, I believe the authors could test the performance of their algorithms against more benchmarks such as Gob.Lin [Cesa-Bianchi et al., 2013] and GraphUCB [Yang et al., 2020], but it is not done."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduce improved algorithms for online clustering of bandits by incorporating a novel exploration phase, resulting in a better regret upper bound while using substantially weaker assumptions."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024demystifying,\ntitle={Demystifying Online Clustering of Bandits: Enhanced Exploration Under Stochastic and Smoothed Adversarial Contexts},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=421D67DY3i},\nnote={under review}\n}"
},
"abstract": {
"value": "The contextual multi-armed bandit (MAB) problem is crucial in sequential decision-making. A line of research, known as online clustering of bandits, extends contextual MAB by grouping similar users into clusters, utilizing shared features to improve learning efficiency. However, existing algorithms, which rely on the upper confidence bound (UCB) strategy, struggle to gather adequate statistical information to accurately identify unknown user clusters. As a result, their theoretical analyses require several strong assumptions about the \"diversity\" of contexts generated by the environment, leading to impractical settings, complicated analyses, and poor practical performance. Removing these assumptions has been a long-standing open problem in the clustering of bandits literature. In this work, we provide two partial solutions. First, we introduce an additional exploration phase to accelerate the identification of clusters. We integrate this general strategy into both graph-based and set-based algorithms and propose two new algorithms, UniCLUB and UniSCLUB. Remarkably, our algorithms require substantially weaker assumptions and simpler theoretical analyses while achieving superior cumulative regret compared to previous studies. Second, inspired by the smoothed analysis framework, we propose a more practical setting that eliminates the requirement for i.i.d. context generation used in previous studies, thus enhancing the performance of existing algorithms for online clustering of bandits. Extensive evaluations on both synthetic and real-world datasets demonstrate that our proposed algorithms outperform existing approaches."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"clustering of bandits",
"linear bandits",
"online learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/3ab0e3698208154b280429627c1ac8f866acc871.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning theory"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/8397980511bbd2ed8580401bdc08e4040aeb4dc5.zip"
},
"title": {
"value": "Demystifying Online Clustering of Bandits: Enhanced Exploration Under Stochastic and Smoothed Adversarial Contexts"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
42TXboDg3c | Balancing Interpretability and Accuracy: Energy-Ensemble Concept Bottleneck Models for Enhanced Concept Inference | main | Active | Energy-Based Models;Concept-Based Models;Explainable AI | interpretability and explainable AI | 3;3;5;5 | 4;5;5;4 | 1;2;3;1 | 2;2;2;2 | 2;1;2;3 | 4 | 4.5 | 1.75 | 2 | 2 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Currently, I am a bit borderline with this paper’s decision, given some of my concerns with the fairness of its evaluation. However, I am absolutely happy to be convinced that some or all of my conclusions are wrong and to change my recommendation based on a discussion with the authors. For this, the following questions could help clarify/question some of my concerns:\n\n1. **(Critical)** The intervention results in Figure 4 seem a bit surprising, particularly for CEMs. Are you randomly intervening on the concept embeddings when training CEMs (as indicated in the original CEM paper)? If so, do you have an intuition as to why the interventions in CUB look very different to those seen in the CEM paper, its IntCEM follow-up work [1], the original ECBM paper [2], and other previous works (e.g., [3])? If random interventions are not done during training (i.e., *RandInt* is not used as expected), do you have a sense of how results in Figure 4 and Table 1 change for CEMs when CEM's RandInt is used during its training? If no random interventions are performed during training for CEM, at the very least, I would strongly suggest that this work should make it very clear that the used “CEM” baseline is not the same as the one the original work proposed.\n2. **(Critical)** More generally, and more importantly, I am concerned with how fairly other baselines were studied during the evaluation. The fact that CelebA, the one dataset where EE-CBM is underperforming compared to other baselines, is *without justification* pushed to the Appendix and not even discussed in the main body should be reason for concern. Can you please elaborate on why results on this dataset were pushed to the Appendix and why they are not discussed in the main body of the paper?\n3. **(Critical)** What were the hyperparameter values tested during training for all baselines? These are all missing and not discussed (only EE-CBM’s *selected* hyperparameters are discussed in Appendix B). 
Were hyperparameters selected based on best validation accuracy or test accuracy? This is not entirely clear in Appendix B, yet it makes a huge difference in terms of the fairness of the evaluation, and it is necessary for reproducibility.\n4. **(Major)** Related to the question above, how does EE-CBM’s performance change as one varies its hyperparameters? Given a large number of hyperparameters this model has ($\\lambda_c$, $\\lambda_y$, $\\lambda_e$, $\\lambda_\\text{mmd}$, $\\lambda$ for Langevin dynamics, concept embedding size $u$, etc.), I believe it is key to have this sort of information somewhere in the paper and, at the very least, a guideline on how to select these values in practice.\n5. **(Major)** Could you please elaborate on why introducing the energy-based pathway was useful/needed in the first place? I can definitely see that it helps (which is great!), but I believe the motivation for why such a path was needed in the first place is missing in the paper. Is there any way to frame it to make it immediately clear that an energy-based pathway for concept prediction is needed?\n6. **(Major)** Related to the previous question: I have some hesitations about the continuous claim in this work that just because a method uses a higher-capacity model, it is less interpretable (see section 2.1 for examples where this claim is made several times). Regardless of how a model generates an explanation (i.e., whether it does this using a white box model or a highly-parametric complex model), if this explanation is (1) accurate, (2) reflective of the downstream task (i.e., it contains all the necessary information to describe the downstream label), (3) composed by human-understandable units of information (i.e., concepts), and (4) actionable (e.g., you can perform interventions or counterfactuals to see how the final decision changes), then I do not see why it matters whether it was generated by a large complex black box model or a simple white box model. 
Could you please elaborate on why using complex backbones to predict concepts is worse when all of the goals mentioned above, which are the goals of most, if not all, CBM-like approaches, are satisfied? And in that case, why is this argument not applicable to using a complex backbone for EE-CBM’s $f(\\mathbf{x})$ function or a complex MLP for its energy function?\n7. **(Major)** What is the intuition behind EE-CBM’s better generalization to background shifts? From the text, it is unclear why, intuitively, this must be the case.\n8. **(Major)** Do you have a sense as to how this method would perform when dealing with concept incompleteness in the training set? This is a key factor to consider/evaluate if one is to know how this approach can be used in real-world tasks where concept annotations may not be sufficient to explain the downstream task fully.\n9. **(Major)** Are concept uncertainty labels used during training (e.g., in CheXpert)? This seems to be implied when talking about Table 1’s results in Section 4.1. However, it is not explicitly indicated or discussed anywhere in the main body of the paper.\n10. **(Minor)** Could you please elaborate on the computational training cost of introducing the energy-based pathway in this model?\n11. **(Minor)** What does Figure 5 provide that Table 1’s concept accuracy column does not already provide? I might’ve misunderstood something here but I am not entirely sure what the key message of Figure 5 is, as it is unclear how those examples were selected and how that shows that the model truly “understood” a concept (it could just predict a concept’s value entirely from spurious correlations without having to really understand it).\n12. **(Minor)** Why is it claimed that EE-CBM is a “concept scalar model” when, in reality, it still generates a high-dimensional concept representation for each concept that is only afterwards gated by the scalar probability? 
Am I misunderstanding something here?\n\n### Minor Suggestions and Typos\n\nWhilst reading this work, I found the following potential minor issues/typos which may be helpful when preparing a new version of this manuscript:\n\n1. **(Potential Typo)** In line 48, “CEM is modified CBM networks” should probably be “CEM is a modified CBM network”.\n2. **(Potential Typo)** In line 262, “… hidden connections between concepts are learned and representation is improved” should probably be something along the lines of “… hidden connections between concepts are learned and their representation is improved”\n3. **(Clarity, IMPORTANT)** Is a sentence missing in line 286? It jumps to equation 10 without any preamble or explanation.\n4. **(Nitpicking, notation)** In equation (10), it seems that an upper case $\\Sigma$ is used for the summation notation rather than Latex’s standard \\sum command (e.g., $\\sum_{k=1}^K$).\n5. **(Clarity)** When talking about high-dimensional concept representations, using the word “concept” to mean both the actual concept and its representation can complicate the reading (e.g., as in Section 3.1 where “concept” is used to mean a concept’s high dimensional representation and the actual concept). Instead, I would suggest using “concept representation” or “concept embedding” when talking about a specific concept’s high-dimensional representation.\n\n## References\n\n- [1] Espinosa Zarlenga, Mateo, et al. \"Learning to Receive Help: Intervention-Aware Concept Embedding Models.\" NeurIPS (2023).\n- [2] Xu, Xinyue, et al. \"Energy-based concept bottleneck models: unifying prediction, concept intervention, and conditional interpretations.\" ICLR (2024).\n- [3] Collins, Katherine Maeve, et al. \"Human uncertainty in concept-based AI systems.\" *Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society*. 2023."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "Thank you so much for submitting this work! I enjoyed reading this paper, learned a lot from it, and appreciate the time taken to write it up and submit it to ICLR. Below are what I believe are this paper’s main strengths:\n\n1. **[Originality] (Critical)** The idea of introducing an energy-based pathway to concept prediction, on top of a standard concept representation learning pathway, is a clear novel use and extension of ideas in previous concept-based models. As such, I believe this work is certainly novel and may be of potential interest to the rest of the community.\n2. **[Significance] (Major)** The paper's main purpose, accurately and interpretably predicting concepts and tasks for CBM architectures, is an important and highly active area of research. If it is proven to work as expected, this work has the potential to be impactful.\n3. **[Quality and Clarity] (Minor)** The method is very well explained and written. Moreover, the paper is very well placed within the CBM and XAI literature. I would mark this as a major strength if it weren't for the lack of motivation to explain why energy-based prediction is the best way/approach to achieving this paper's goals.\n4. **[Quality and Significance] (Minor)** The method is evaluated across a multiplicity of datasets against several key baselines, where it is shown to outperform existing baselines. Therefore, this work provides large amounts of evidence in favor of the proposed method's effectiveness. I would mark this as a “critical” strength if it weren’t for some major concerns I have regarding how some of the baselines may be evaluated (see below).\n5. **[Significance] (Minor)** The paper provides the code and configs needed to reproduce the EE-CBM results in this paper. It is therefore taking the necessary steps to ensure reproducibility; however, it could benefit from also including details/code to reproduce the remaining baselines used during evaluation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces Energy Ensemble Concept Bottleneck Models (EE-CBMs). EE-CBMs employ a combination of energy-based concept prediction and traditional concept representation prediction to improve the concept predictive performance of CBMs, thereby improving their generalization and downstream task performance. By learning concept embeddings and gating them with their learned probabilities while incorporating a residual channel from the input, EE-CBMs can achieve high concept and task accuracies across several tasks while being receptive to concept interventions and better generalizing across distribution shifts on their inputs."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "In contrast, I believe the following are some of this work’s limitations:\n\n1. **[Quality and Significance] (Critical)** I have some major concerns regarding the fairness of the evaluation against existing baselines. These concerns include (1) the fact that some results for some of the baselines seem to **contradict those seen in previous works** (including the original energy-based CBM and CEMs), without any explanation for the discrepancy, and (2) the fact that CelebA, a dataset where EE-CBM seems to be underperforming, is, for some reason, **pushed to the appendix without any justification or even mention in the main body**. Moreover, given that there is no mention of how hyperparameters were selected for competing baselines, it is very difficult to judge the fairness of the evaluation, even if one is familiar with those baselines. See below for specific questions on these matters.\n2. **[Significance] (Major)** EE-CBM requires several hyperparameters to be selected ($\\lambda_c$, $\\lambda_y$, $\\lambda_e$, $\\lambda_\\text{mmd}$, $\\lambda$ for Langevin dynamics, concept embedding size $u$, etc.), yet no recommendations or ablations are provided to understand how these values affect EE-CBM’s performance and its usability. Moreover, it is unclear how the introduction of MCMC or Langevin dynamics affects the training times of EE-CBM compared to similar baselines.\n3. **[Significance and Clarity] (Major)** The motivation behind using a combination of an energy-based pathway and a concept-representation-learning-based pathway for concept prediction is not entirely clear. I can see that it works and improves things; however, this work could be significantly more impactful if it better motivated the need for such a pathway and built a clear argument as to why it improves things. See below for specific questions on these matters."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "N/A"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper is well-structured and clearly written, making it easy to follow.\n- The experiments include a wide range of baselines and datasets, providing strong validation for the performance of EE-CBM in improving concept and label accuracy.\n- The design of concept extraction and concept probability branches is reasonable."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper targets strengthening concept accuracy in CBMs. To this end, the authors propose EE-CBM, an energy-based approach. Specifically, EE-CBM incorporates a concept extraction branch and a concept probability branch and applies an MMD loss to each concept embedding. Extensive empirical experiments show the effectiveness of EE-CBM."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- As the authors claimed in the introduction section, CEM was proposed to address the trade-off between accuracy and interpretability, which is the same motivation as that of EE-CBM. Thus, the motivation of this paper is weak, and the authors could offer further discussion of what CEM failed to do, in addition to the methodological differences.\n- Although EE-CBM is devised to enhance concept accuracy, the improvement in concept accuracy is extremely incremental, as shown in Table 1, compared to ECBM or Prob-CBM.\n- The authors could display some wrongly classified samples and their corresponding concept values for EE-CBM and other approaches.\n- It seems that the authors mixed the meanings of model interpretability and concept accuracy and used these two expressions interchangeably. However, they are fundamentally different, so I strongly suggest the authors add a paragraph explaining the relation between model interpretability and concept accuracy.\n- Fig. 3 seems to be incomplete, with wrong indices."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "- What is C’? Is it the concept prediction or concept representation/embeddings?\n- What are the results of your model when reducing the number of concepts? Does it still provide high classification accuracy? For example, use CUB with only 10 randomly selected concepts for training and inference. This is necessary to demonstrate that you break the information bottleneck.\n- Why are CEM and ECBM completely unresponsive to interventions in Figure 4?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- Innovative Approach: The introduction of the energy-based concept encoder and the energy ensemble gate (EEG) is a novel approach to address the trade-off between accuracy and interpretability.\n- Strong Results: The experimental results demonstrate that EE-CBM achieves state-of-the-art performance on multiple datasets, showing significant improvements in both concept and task accuracy.\n- Energy based model presentation: although the space does not allow for extensive descriptions, the provided succinct description allows understanding the overall idea and functioning of energy-based models without checking the"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces the Energy Ensemble Concept Bottleneck Model (EE-CBM), which aims to improve the balance between interpretability and accuracy in Concept Bottleneck Models (CBMs). The EE-CBM employs an energy-based concept encoder and integrates concept values and probabilities to enhance concept inference and reduce concept uncertainty. The model is evaluated on multiple benchmark datasets and shows state-of-the-art performance in both concept accuracy and interpretability."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "## Major Issues\n- **Related work**: The presentation and comparison with existing work are insufficient. The paper lacks a dedicated related work section, and the existing background section and the comparisons provided in the introduction are inadequate.\n - The CEM is likely misunderstood by the authors; it focuses on making task predictions on concept embeddings, not on using two concept representations.\n - The claim that \"EE-CBM resolves uncertainty in concept prediction\" was already addressed by ProbCBM.\n - The statements about label-free CBMs are questionable. Concepts in these models are explained by their own semantic meaning, and heatmaps can be used to explain concept predictions, in both cases just like in standard CBMs.\n- **Method Notation**: Many notations are unclear or insufficient in the method presentation:\n - $C’$: the authors define it as “concept value $C’$ through FC layers as in conventional CBM models”, but then they say that $\\phi$ maps to $R^u$ where “$u$ represents the dimension of the concept”, and then again $K$ concept features $C’$ are mentioned. It appears to be a concept embedding, not the concept values of conventional CBM models.\n - Is $\\phi$ a per-concept concept encoder? In that case, it should have been defined with an index, $\\phi_i$, also in the mapping from $R^d$ to $R^u$.\n - The dimensions of $C$ are not specified.\n - The Concept MMD loss is crucial based on the ablation study results, but it is poorly presented, with statements like \"$\\mu$ is a kind of mapping function,\" which lack scientific clarity.\n- **Metrics to Sustain Claims**\n - “Breaking the information bottleneck”: it is not sufficient to provide high classification accuracy to show that the proposed model breaks the information bottleneck of standard CBM models. The authors should also provide the following metrics:\n - The information plane [1], comparing the methods in terms of mutual information between the concept representations (C) and the input (X) and label (Y).\n - Concept efficiency, to test the model performance when reducing the number of concepts, as shown in CEM.\n - The “Concept Importance” experiment does not report the concept importance - commonly measured with metrics like CaCE [2] to assess how important a concept is for a given task prediction. Instead, the authors only report some qualitative results with the associated concept predictions: all methods achieving good concept accuracy could report the same results.\n\n[1] Tishby, Naftali, Fernando C. Pereira, and William Bialek. \"The information bottleneck method.\" arXiv preprint physics/0004057 (2000).\n\n[2] Goyal, Y., Feder, A., Shalit, U., & Kim, B. (2019). Explaining classifiers with causal concept effect (cace). arXiv preprint arXiv:1907.07165.\n\n## Minor issues\n- **Paper presentation**: The introduction section is not well-written, particularly in the second paragraph when introducing related work. Additionally, the acronym EE-CBM is mentioned without being introduced in the third paragraph, and the use of concept embeddings is mentioned without prior introduction in the fifth paragraph. The background section on CBM is poorly written and structured, with the CBM data and functional representation placed in the middle of the paragraph. The MMD loss ablation study is inserted in Section 4.1, before the ablation studies in Section 4.2.\n- **Validity of experimental results**: There are doubts about the validity of the experiments:\n - Typically, Bool-CBM and Fuzzy-CBM perform much worse than CEM or Prob-CBM, while in your experiments they show comparable results on both the CheXpert and AwA2 datasets. How do you justify this?\n - Experimental settings for the compared methods are missing. The training procedures for the compared methods are not reported, even in the appendix."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. **Clarification on Figure 1(c):** The separation of concept $\\mathbf{c}$ and label $\\mathbf{y}$ in this figure appears contradictory to the joint optimization described in the Energy Concept Bottleneck Model (ECBM). What does $y'$ represent in this context, and why is it depicted as separate from $\\mathbf{c}$?\n2. **Model Performance versus Concept Accuracy (Line 65):** The authors state that \"While these methods can improve label accuracy, often struggle to infer accurate concepts\" (Line 065-066). However, these models can achieve high scores in concept accuracy (referenced in Table 1). This seems paradoxical. Can you explain the apparent discrepancy between these observations?\n3. **Balancing Task Accuracy and Interpretability:** The related work section critiques large networks for prioritizing classification over interpretability. However, the main contribution of your work claims to balance these aspects. Could you provide empirical evidence, similar to Figure 1(c) from [1], showing how your model achieves this balance?\n4. **Sensitivity to Hyperparameters (Section 3.1, Equation 9):** The authors lack detailed ablation studies on the sensitivity of the proposed method to its hyperparameters for each loss component, particularly $\\lambda$. The appendix focuses only on the concept loss hyperparameter $\\lambda_c$ for each dataset. The authors present results when the EEG and MMD loss weights are set to zero (Table 3), but lack comprehensive experimental data on how varying these weights might affect the model's performance for each dataset. Could you provide additional experimental results illustrating the impact of increasing these weights? This would help in understanding the robustness and sensitivity of the model to these specific hyperparameters.\n5. **Comparison of MMD Loss and Concept Whitening (from [2]):** Given that Concept Whitening aims to make concepts nearly orthogonal, how does the MMD loss compare with [2] in terms of effectiveness and efficiency in achieving orthogonality among the learned concepts?\n6. **Handling Uncertainty in CheXpert Dataset (Section 4):** How does the model address the uncertainty attributes present in the CheXpert dataset? Are there specific techniques or modifications employed that enhance the model's reliability and accuracy in this context? Given that the CUB dataset also contains uncertainty attributes, can you explain why these were not utilized in your experiments?\n7. **Lack of Detail on Coop-CBM (Table 1):** Coop-CBM is mentioned without sufficient introduction or referencing. Could you provide a detailed description and relevant citations to clarify its role and significance as one of your baselines?\n8. **Overall Concept Accuracy Not Calculated as in [3]:** The manuscript does not report overall concept accuracy, which could be crucial for assessing the holistic performance of concept predictions across a dataset. Why was this metric omitted, and can it be included to provide a more comprehensive evaluation of the model's interpretability?\n9. **Interventions on Bottleneck Concepts (Figure 4):** When discussing interventions in the bottleneck, are these applied to groups of concepts or individual concepts? Clarifying this could help understand the granularity and specific impact of interventions on the model's output.\n10. **Source of Concepts and Comparison with Similar Methods (Figure 6):** From which dataset were the five concepts selected for analysis in Figure 6? Can you provide a comparative analysis using t-SNE visualizations against similar methodologies, such as those in [2] and [4] (Figure 5), to highlight the distinctions or improvements offered by your approach?\n\n[1] Zarlenga, Mateo Espinosa, et al. \"Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off.\" NeurIPS, 2022.\n\n[2] Chen, Zhi, et al. \"Concept Whitening for Interpretable Image Recognition\", Nature Machine Intelligence, 2020.\n\n[3] Xu et al. \"Energy-based concept bottleneck models.\" ICLR, 2024.\n\n[4] Kim, Sangwon et al. \"EQ-CBM: A Probabilistic Concept Bottleneck with Energy-based Models and Quantized Vectors.\" ACCV, 2024."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "1. The dual-branch architecture is innovative; one branch focuses on concept extraction while the other calculates concept probability, enhancing the model's interpretability.\n2. Incorporation of Maximum Mean Discrepancy (MMD) loss to ensure the concepts learned are orthogonal, which is beneficial for model robustness and interpretability.\n3. Demonstrated robustness against datasets with significant background variability, which is crucial for practical applications."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces an Energy Ensemble CBM (EE-CBM) architecture that integrates energy and concept probability through an Energy Ensemble Gate (EEG). This model aims to balance task accuracy with interpretability, address information bottleneck issues in CBMs, and enhance the distinctiveness of concepts through the use of MMD loss. The proposed approach is promising but requires clearer exposition and stronger empirical validation to substantiate its claims."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The mathematical notation, particularly in equations 14, 15, and 16, is poorly presented and leads to confusion. The pseudocode and overall technical exposition need significant improvement for clarity.\n2. The literature review in Section 2.1 lacks discussion on concept discrimination, despite relevant studies such as those presented in [1]. This omission is a critical gap, especially given past research on concept orthogonality.\n3. The discussion related to label-free approaches in Section 2.1 seems misplaced as it does not pertain directly to supervised Concept Bottleneck Models (CBMs), thus diluting the focus of the related work.\n4. Claims about improvements in model performance and quantifiable uncertainty by the concept probability branch in Section 3.1 are not substantiated with empirical evidence, contrasting with findings from related work like in [2] Figures 4, 5, and 6.\n5. The authors do not demonstrate a significant improvement over existing methods such as those in [3]. The functionalities described could be achieved with simpler architectures (e.g., x-c-y single branch) suggested by prior works, questioning the novelty of the proposed approach.\n6. There is a lack of experiments addressing the trade-off between accuracy and interpretability. The experimental design does not adequately highlight any distinctive advantages of the proposed CBM over conventional approaches.\n\n[1] Chen et al. \"Concept Whitening for Interpretable Image Recognition\", Nature Machine Intelligence, 2020.\n\n[2] Kim, Eunji, et al. \"Probabilistic Concept Bottleneck Models.\" ICML, 2023.\n\n[3] Xu et al. \"Energy-based concept bottleneck models.\" ICLR, 2024."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "EE-CBMs address the trade-off between task accuracy and interpretability in concept bottleneck models by using energy attention-based concept encoding, an energy ensemble gate, and MMD loss."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024balancing,\ntitle={Balancing Interpretability and Accuracy: Energy-Ensemble Concept Bottleneck Models for Enhanced Concept Inference},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=42TXboDg3c},\nnote={under review}\n}"
},
"abstract": {
"value": "Concept bottleneck models (CBM) have emerged as a promising solution to address the lack of interpretability in deep learning models. However, recent research on CBMs prioritizes task accuracy at the expense of interpretability, weakening their ability to accurately infer key concepts. This work addresses this trade-off by introducing the energy ensemble CBM (EE-CBM). The EE-CBM leverages an energy-based concept encoder to effectively extract concepts, overcoming the information bottleneck common in conventional CBMs. Additionally, a novel energy ensemble gate within the EE-CBM architecture efficiently combines energy and concept probability to further address this bottleneck. Moreover, the EE-CBM employs the maximum mean discrepancy loss to enhance concept discrimination within the concept space and facilitate accurate concept inference. An experimental evaluation on benchmark datasets (CUB-200-2011, TravelingBirds, AwA2, CheXpert, and CelebA) demonstrates that EE-CBM achieves state-of-the-art performance in both concept accuracy and interpretability. This work positions the EE-CBM as a significant advancement in CBM research, enabling CBMs to effectively balance performance and interpretability for improved model transparency. Our code is available at https://anonymous.4open.science/r/EE-CBM-F48D."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Energy-Based Models",
"Concept-Based Models",
"Explainable AI"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/fa4a2419b13a670ecc5a925c879ff51e25173d49.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Balancing Interpretability and Accuracy: Energy-Ensemble Concept Bottleneck Models for Enhanced Concept Inference"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
43Ckmku1fC | Towards Stabilizable Sequential Smoothing Spline Interpolation by Point Forecasting | main | Active | spline interpolation;sequential decision making;stability;controllability;time series forecasting | learning on time series and dynamical systems | 3;5;6;8 | 3;3;4;4 | 1;2;3;4 | 1;2;3;3 | 2;2;3;3 | 5.5 | 3.5 | 2.5 | 2.25 | 2.5 | 0.83205 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. If these hyperparameters are assumed given for the scope of this paper, how did you set them? Since they tune the trade-offs between roughness and fit quality, they seem very important for determining stability. Perhaps there is a way to adaptively update these parameters to determine the forecasts?\n2. I understand the contribution is to use dynamical systems theory to analyze the instabilities in the low-latency regime, but how does this compare to adaptive (for the parameters) methods such as Bayesian p-splines?\n3. The state dynamics are then Markovian in nature? If so, that seems worth mentioning; please correct me if I'm wrong in assuming this.\n4. Can you explain briefly why you are considering a linear dynamics model (i.e., since the actions a_t are given by a dynamical system featuring matrices that capture certain relationships)? Is this sufficient for the model's expressive power, or are you simply leveraging well-established linear control theory as a start?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper is a good first work in exploring the problem of stabilization for smooth spline interpolations through the lens of dynamical systems theory. \n2. The effect of the delay of the sequence of data points on the action sequence (modeled as a limited lookahead) is clearly motivated and validated through the theory and experimental results. \n3. Overall, an interesting connection between two well-established fields of spline interpolation and dynamical systems theory with good applications in forecasting. \n4. The sequential nature of the splines is a practical set-up for large-scale data throughput (i.e. memory constraints)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors use dynamical systems theory to understand the problem of stabilizing smoothing spline interpolation in low-delay situations. This work formalizes the internal instability and asserts the controllability of sequential smoothing spline interpolators. The authors provide a stabilizing strategy based on data-point forecasting that can operate even in delay-less regimes without sacrificing any smoothness of the interpolated trajectory."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The hyperparameters that are paramount to determining the roughness/fit trade-off also contribute to solution stability, so I am not sure why these are being fixed instead of learned adaptively. This approach seems a bit limited in that these are not varying-coefficient models.\n2. The authors addressed this already, but their conjecture that the system is controllable for any \\rho needs more theoretical foundation. This seems to be the crux of the argument, so it needs much stronger support.\n3. More discussion could be had on the offline setting (where the parameters are actually being trained), but this is not that big a deal for the context of this paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Here are some minor questions and comments. They do not affect my score because they are far less important than the points made above and because I think they should be easy to resolve. Still, I think resolving them might improve the manuscript.\n\n- What's the setup for Figure 1?\n- There are inconsistent linebreak formats (e.g. line 062/063 versus line 071/072). Is this on purpose?\n- Line 077: What do \"overlapping\" and \"distort\" mean in this context?\n- Line 081: What does \"this representation choice\" refer to? (This question might appear petty, but this sentence seems critical for the problem setting).\n- Line 307/308: Where are the results for this statement?\n- The problem is likely a lack of knowledge on my side, but I struggle to connect Section 2.2 (\"Dynamic programming approach\") to \n\n > Bellman, R., B. G. Kashef, and R. Vasudevan. \"Splines via dynamic programming.\" Journal of Mathematical Analysis and Applications 38.2 (1972): 471-479.\n\n It might also be reasonable to expect a discussion of Bellman et al. in the paper, which I couldn't find. If the authors agree with me, it would be great to see it added to the paper. \n\n- The parametrisations of A in line 216 versus Equation 12 do not match. Line 216 suggests that A has zeros on the diagonal and constant factors (independent of u) on the first upper diagonal."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "1. The introduction is nicely written and easy to follow.\n2. The experiments are convincing, especially those in Appendix C.\n3. A paper that studies the interface of splines, dynamic programming, and state-space models should provide relevant insights to the broader machine learning community (even if the technical results are heavy on control theory and signal processing, which not every machine learner might be familiar with)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The submission discusses the stability of sequential implementations of spline smoothing.\nThe main contributions are:\n\n- Identifying a time-varying linear system for the smoothing spline coefficients through dynamic programming \n- Analysing the stability and controllability of this system to study the stability of the spline interpolation problem\n- Stabilising spline interpolation through delay and forecasting\n\nThe experiments suggest that the stabilisation strategies improve the stability of spline interpolation notably."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Unfortunately, I recommend rejecting the submission despite the strengths outlined above. The reason is that the stability analysis raises questions which I doubt can be resolved without major revisions.\nConcretely, I identify the following weaknesses.\n\n### 1. Result 1 seems incorrect\n\n- Equation 16 in the proof of Result 1 needs to be explained more thoroughly. I need more instructions to verify that Equation 16 is the correct determinant. Further, the statement that det(M) can't be zero for any u_t > 0 is incorrect: take $u_2=u_3=1$ and $u_4=4$, then Equation 16 is zero. \n- I don't think the system in Equation 12 is controllable. For example, take $\\rho=2$ and $u_t=1$ for all $t$ so that time-invariant theory applies. Then, A has left-eigenvector $(0, 0, 1, 3)$ with eigenvalue 2, and this eigenvector has a nonzero inner product with $B$. Thus, the setup contradicts the condition in Appendix C.6.3 in Anderson and Moore (1979). It also contradicts conditions C.5.3 (reachability) and C.7.3 (stabilisability). By padding with zeros and replacing 1 and 3 appropriately, the same case can be made for $\\rho > 2$.\n- Conjecture 1 is claimed to be supported by simulation, but the simulation results are not in the paper.\n\n### 2. The linear-system perspective needs more clarity\n\n- The linear system in Equation 9 (with parameters in Equation 12) is a central contribution of the paper according to the \"Contribution\" paragraph on page 2. As a reviewer, I need more instructions for deriving Equation 9 from Equation 7 than in line 201. As is, I can't verify or falsify Equation 9, which is problematic because Result 1 builds on Equation 12, and Result 1 contains mistakes (see previous point).\n- The terminologies of controllability, reachability, and stabilisability need to be more clearly distinguished. Section 3 uses \"controllable\", but Result 1 shows \"reachability\", and Section 4 interprets Result 1 as having shown stabilisability. For context, I use the terminology from Appendices C.5, C.6, and C.7 in Anderson and Moore's \"Optimal filtering\" book (1979). \n\n\n\n### 3. The manuscript lacks a discussion of spline smoothing and linear systems via stochastic processes\n\nAnother connection between linear systems and smoothing splines is known (via stochastic processes, not via dynamic programming): \n\n> Kohn, Robert, and Craig F. Ansley. \"A new algorithm for spline smoothing based on smoothing a stochastic process.\" SIAM Journal on Scientific and Statistical Computing 8.1 (1987): 33-48.\n\n> Wahba, Grace. \"Improper priors, spline smoothing and the problem of guarding against model errors in regression.\" Journal of the Royal Statistical Society Series B: Statistical Methodology 40.3 (1978): 364-372.\n\nWahba relates the smoothing spline to the repeatedly-integrated Wiener process. Concrete expressions for the time-discretisations of integrated Wiener processes (compare these to Equation 12 in the submission) are in Section 5.4 of: \n\n> Hennig, Philipp, Michael A. Osborne, and Mark Girolami. \"Probabilistic numerics and uncertainty in computations.\" Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 471.2179 (2015): 20150142.\n\nBoth Wahba (1978) as well as Kohn and Ansley (1987) need to be discussed more prominently. I mention Hennig et al. (2015) because Hennig et al.'s Section 5.4 eases comparing Equation 12 in the main paper to Wahba's work."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. It seems that the performance of the adopted forecasting method is critical to the performance of the proposed scheme. In fact, the such step-ahead estimation schemes are often described by state space forms, which come with own stability considerations. How is this accounted for?\n\n2. In the same vein, could you discuss potential limitations associated with forecasting errors in your method? How do you handle scenarios where the forecasting model is inaccurate, and could forecast errors potentially destabilize the interpolation?\n\n3. It seems that certain hyper-parameters are involved in the proposed setting. How are these configured. Is a sensitivity analysis necessary to ensure adequate performance?\n\n4. Could you provide more details on the computational overhead introduced by forecasting at each step? Specifically, how does the computational load vary with forecasting model complexity, and what measures are taken to ensure real-time applicability?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "The treatment of the interpolation problem as a discrete dynamic estimation problem, allowing for application of forecasting, dynamic programming and application of formal stability and controllability metrics is a particular strength of this work.\n\nThe paper demonstrates originality by addressing a specific limitation in sequential smoothing spline interpolation, particularly under low-delay constraints where traditional methods fall short. A main novelty stems from the innovative use of data forecasting as a stabilization mechanism. Rather than requiring a delay or compromise on smoothness, as is common in existing methods, this approach leverages forecasting models to predict future data points, effectively simulating a delay without waiting. The paper rigorously evaluates the stabilization strategy in both uniformly and non-uniformly sampled data environments.\n\nThe authors provide theoretical foundation by formally proving the instability of sequential smoothing splines under low-delay conditions and presenting a forecasting-based stabilization strategy. The work is clearly written and significant in terms of potential applications."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper addresses the instability in sequential smoothing spline interpolation, especially in low-delay scenarios where typical solutions sacrifice either delay or smoothness. Existing smoothing spline methods often depend on delaying data processing to stabilize trajectories, which is infeasible in real-time applications. This work introduces a novel stabilization approach through data forecasting, allowing low-delay operation without compromising smoothness. By formalizing the instability in sequential smoothing splines and establishing the controllability of these models, it fills the research gap in stabilizing real-time smoothing spline interpolation, especially in delay-sensitive contexts where both stability and smoothness are crucial.\n\nThe authors model the trajectory of a smoothing spline interpolator as a discrete dynamical system of spline coefficients, analyzing its internal instability and controllability. The primary strategy proposed for stabilization employs data point forecasting to predict future data points, simulating the effect of delayed data without waiting. The method leverages a dynamic programming approach to set up an action-update mechanism, with the instability and controllability of this mechanism analyzed through control theory. Two forecasting methods are explored: a simple zero-order hold model and a parametric linear model with optional online or offline learning. The strategy ensures stability by enabling forecasts that approximate the smooth behavior seen in delayed responses without actual delays, catering to low-delay regimes in sequential data."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. While the work shares a valuable perspective, which - in the opinion of this reviewer - moves beyond previous attempts, certain weaknesses are identified. \nFirstly, the original work where the interpolation problem is cast as one of dynamic programming is the one by Bellmann, Kashef and Vasudevan (1972), which is not cited or discussed in this work. The links to that original work and how this work moves forward need to be clarified. \nSimilarly, works on zero delay interpolation using alternate trainable strategies exist and some mention of this is warranted, e.g., Ruiz-Moreno, Lopez-Ramos, Beferull-Lozano (2023).\n\n2. While the paper introduces forecasting as a core strategy for stabilization, the exploration of forecasting models is somewhat limited. This is in part justified, since the main devised experiments are generated by linear AR(2) models, which allows for use of a basic zero-order hold model or a simple linear model. However, at the same time, this likely means that a training of an AR forecasting model could suffice for the task at hand. Given the simple case studies, the work not delve into exploration of more sophisticated forecasting techniques. It is of course appreciated that section C4 is added to tackle this consideration, however, it seems like this work would benefit from inclusion of more such complex processes.\n\n3. The paper would benefit from more explicit comparisons to contemporary interpolation approaches that also aim for zero delay. While the method’s novelty is highlighted against traditional delay-based techniques, the work does not sufficiently benchmark its performance against other recent zero-delay interpolation methods, such as the trainable real-time interpolators (RTIs) discussed in the referenced work from 2022. Such comparisons could validate the claimed advantage of this approach in real-world applications.\n\n4. Given the relevance of this work for interpolation of real-world datasets, it is surprising that no such datasets are actually employed. For instance, the aforementioned work of Ruiz-Moreno, Lopez-Ramos, Beferull-Lozano (2023) employs one synthetic dataset and five\nreal datasets. It seems that such real datasets can also be considered herein."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Here are some other questions and comments, the major ones are in the Weakness section:\n\n1. There are a few potential typos and off-by-one things in the text. For example, when defining matrix $A$ in equation (12), $A_{1,1}$ by the formula should be 0, not 1, and in (12) the matrix $A$ seems to be of shape $(2\\rho+1) \\times 2\\rho$ following the ddots. The authors can carefully review them.\n\n2. Maybe this is another formulation, but for the optimization problem (1), I usually see people using $D^{(\\rho-1)}$ as the penalty term. Say for cubic splines, the penalty is the second derivative, like in (Hastie et al, The Elements of Statistical Learning, section 5.4).\n\n3. It is not clear to me how do you solve the DP (5)-(8). Usually we propagate it backward in time, but seems (5)-(8) is forward in time. How do you evaluate J in (5) and optimize for $a_1^*$?\n\n4. In section 4.2.1, how do you define $\\mu_0$? Can you give some justification of your $\\mu_t$?\n\n5. Why is controllability important to know? What does it imply?\n\n6. Does being stable necessarily imply not being controllable, and being controllable imply instable?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "This paper studies an important question of learning a model from streaming data. This topic becomes more and more important in this big-data era. For the specific model considered in this paper -- the smoothing spline for streaming data -- the author(s) proposed a novel way of estimation, which is empirically stable and does not require waiting for future data like previous methods. This is demonstrated through an experiment in the paper. The authors also formulated the stability of the fitted spline as a dynamical system problem and provided some analysis on this aspect. This provides a novel approach to deepen the understanding of the instability issue."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "I thank the authors for contributing this study to the community. In this paper, the author(s) proposed a new way of learning smoothing splines for a streaming dataset. The naive approach for this simply updates the smoothing spline using the just-observed data, and it is known to be potentially instable. The newly proposed approach predicts future observations using a parametric model trained on the past data, and then uses the available data and predicted data together to update the smoothing spline for the next time stamp.\n\nThe paper proved theories about stability of the underlying dynamics of spline coefficients and controllability of this dynamics. Besides the theory, a numerical test on a synthetic data is provided to justify that the proposed approach is more stable compared to the naive approach."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper has a few technical and presentation issues. I will list the major ones below. I begin with technical issues. Below is a list of technical problems I think the author(s) should address in order for the paper to be valid/more readable.\n\n1. In the proof of theorem 1, the authors seem to confuse the operator norm and spectral radius of matrices. It is not sufficient to conclude $\\\\|Ax\\\\| \\leq c \\\\|x\\\\|$ from that the eigenvalues of A are smaller than c, even if A is positive upper-triangular. For example\n\n\\\\begin{equation}A = \\\\begin{pmatrix} 1 & M\\\\\\\\ 0 & 1\\\\end{pmatrix}\\\\end{equation}\n\nwith some $M \\gg 1$. Now take $x = (0, 1)^\\top$ then you should observe $\\\\|A^k x\\\\|$ grows to infinity (in this case linearly in $k$). \n\n2. The dynamics of the spline update follows equation (9), which is not the internally stable case studied in Theorem 1. The authors did not provide the connection between these two. What does internally stable/instable mean for the spline update? It is clear that internally instable implies (9) is instable if one can freely choose $\\alpha_t^* = 0$, but I am not sure this is true from (10a) and (10b). Conversely, for $\\alpha_t^*$ given as in (10a-b), can we say the \"external\" dynamics is necessarily instable? These important connections should be discussed if they are not trivial.\n\n3. In section 3.2 the authors called $\\alpha_t$ \"inputs\". I feel it is better if the authors can establish controllability using $o_t$ as inputs, showing that the spline segments can become any polynomial you want using appropriate data $(x_t, y_t)$. If $\\alpha_t$ is used as inputs, then more explanation on what it means to \"input\" $\\alpha_t$ to the system is helpful.\n\n4. The methodology (19) is not clearly a valid choice. As the authors pointed out, the time series may not come in in a uniform time grid, and $u_t$ can change in $t$. Given this setup, I do not believe an AR-like model (19) can fit the data well. Suppose my data model is $y(t) = t$, and I observe $y_{t_1} = 1,~y_{t_2} = 2,\\ldots, ~y_{t_k} = 2^{k-1}$ with time stamps $t_j = 2^{j-1}$. Then your learned $\\\\Theta$ would simply double the last observation by 2. But suppose now my $t_{k+1}$ is no longer $2^k$, then the prediction is completely wrong. So some justification of the model would be nice.\n\n5. In view of 4 above, I don't think the numerical experiment is sufficient. First of all, the experiment uses an AR model to generate the data, for which one expects the method in (19) works fine. It is nice to see (19) works for some other models or for a real dataset. Second, for the AR data, I wonder if the windowing strategies mentioned in the paper can potentially work well, too. It would be nice if we can see comparisons between the new method and state-of-the-arts. Finally, the data is generated on a uniform time grid, and it is nice to see the performance of (19) on a non-uniform time grid.\n\nNext I comment on some non-technical issues. The idea behind the work is nice, but the presentation can be better.\n\n1. I understand these papers are quite compact, and some details cannot be elaborated. However, the main text should provide a smooth introduction to those not interested in the technical details and those new to the field. However, I found it occasionally hard to grasp the ideas looking at the main text, and I ended up looking at the appendix to confirm the \"conditional expectation\" of my interpretation of what the authors mean in the main text (e.g. the delay mechanism, the definitions in the first paragraph of section 2.2). To summarize, I don't think the authors did a perfect job presenting easy-to-understand overviews in the main text. To address this, I suggest the authors try to re-think the presentation in the main text, add in some formulas, and remove some text descriptions and pictures to save space (I like those, but in this case, they appear to be insufficient to convey the idea).\n\n2. Some key parts of the paper is missing, making the logic of the paper less clear. (1) after you predict H steps into the future, how do your algorithm find the next segment? By doing a spline fit to the entire data? By solving the DP you constructed? How do you solve the DP if this is the case? A simple algorithm environment or even some description could be more concise than Figure 2. (2) How does the proposed approach improve stability? Theorem 1 says cubic splines are not internally stable, does your method alleviate this issue?"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "To the best of our knowledge, we propose the first strategy for stabilizing sequential smoothing spline interpolators under (possibly) delayless regimes and without sacrificing any smoothness of the interpolated trajectory."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024towards,\ntitle={Towards Stabilizable Sequential Smoothing Spline Interpolation by Point Forecasting},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=43Ckmku1fC},\nnote={under review}\n}"
},
"abstract": {
"value": "Sequential smoothing spline interpolators exhibit unstable behavior under low-delay response requirements.\nThat is, instability issues are observed when a smoothing spline interpolator is forced to provide an interpolated trajectory piece subject to processing only a few to no incoming data points at each time stamp.\nTypically, the above instability setback is solved by increasing the delay, sacrificing some degree of smoothness in the interpolated trajectory, or a combination of both. \nHowever, stable sequential smoothing spline interpolation strategies working under low delay and without compromising their degree of smoothness seem vastly unexplored in the literature.\nTo the best of our knowledge, this work formalizes the internal instability and asserts the controllability of sequential smoothing spline interpolators for the first time.\nSpecifically, we model the trajectory assembled by a smoothing spline interpolator as a discrete dynamical system of the spline coefficients, facilitating the analysis of its internal instability and controllability.\nFrom these results, we propose a stabilizing strategy based on data point forecasting capable of operating even under delayless regimes and without sacrificing any smoothness of the interpolated trajectory.\nOur claims are theoretically confirmed, or experimentally supported by extensive numerical results otherwise."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"spline interpolation",
"sequential decision making",
"stability",
"controllability",
"time series forecasting"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/69d26404f5b879334ceb22de4f2ed742cb46f81a.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning on time series and dynamical systems"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/7e9f71cffcca797fe11d2b11a434c9088bebb09f.zip"
},
"title": {
"value": "Towards Stabilizable Sequential Smoothing Spline Interpolation by Point Forecasting"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
44CoQe6VCq | Test of Time: A Benchmark for Evaluating LLMs on Temporal Reasoning | main | Active | Temporal Reasoning;Temporal Graphs;LLMs | datasets and benchmarks | 6;6;6;8 | 3;4;4;4 | 3;3;3;3 | 3;3;2;3 | 2;3;3;3 | 6.5 | 3.75 | 3 | 2.75 | 2.75 | 0.333333 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "As mentioned in weakness."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- For the ToT-Semantic dataset, designed to evaluate LLMs on temporal semantics and logic, the authors employ seven graph generation algorithms and develop eight manually crafted question types. This diversity allows the generation of a large volume of synthetic questions, adding rigor to the dataset and covering various temporal reasoning facets.\n\n- The study provides detailed insights into the temporal reasoning capabilities of frontier LLMs, including how factors such as graph size, question type, and temporal fact ordering influence performance. These observations offer valuable understanding into both the strengths and limitations of current LLMs in temporal reasoning."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces two datasets specifically crafted to evaluate large language models (LLMs) on temporal reasoning across diverse scenarios. The authors argue that existing benchmarks for temporal reasoning primarily use question-answering tasks based on Knowledge Graph -style temporal facts about well-known entities. Such benchmarks may reflect a model’s capacity to leverage prior knowledge rather than assess true temporal reasoning skills. To this end. the proposed datasets aim to measure two core temporal reasoning abilities of LLMs: (1) understanding the semantics and logic of time, and (2) performing temporal arithmetic."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- While ToT-Semantic focuses on temporal semantics and logical reasoning, the paper does not clearly explain how the graph generation process ensures the correctness of graph evolution. Specifically, the distinction between generating static graphs and those with temporal dynamics is not addressed, leaving questions about the dataset's fidelity to real-world temporal processes. \n\n- In introduction, the paper emphasizes the importance of evaluating LLMs on temporal reasoning but does not clearly explain why a graph structure is essential for this assessment. Could the authors elaborate on the necessity of graphs in this context?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1.\tHow would the performance of LLMs change if the benchmark included static facts in addition to explicit temporal facts?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "-\tThe proposed ToT benchmark is designed to address the limitations of existing benchmarks by encompassing a wider variety of graph structures and question types, enabling a more nuanced evaluation of LLMs' temporal reasoning abilities\n-\tThe authors offer an evaluation of temporal reasoning by decoupling it into semantic and arithmetic aspects. This two-pronged approach provides a more detailed analysis of LLM capabilities."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper focuses on evaluating the temporal reasoning abilities of large language models (LLMs). The authors introduce a new synthetic dataset, Test of Time (ToT), which consists of two tasks: ToT-Semantic for temporal semantics and logic, and ToT-Arithmetic for temporal calculations. The study evaluates five LLMs and analyzes the impact of factors like graph structure, question type, and fact order on performance. The findings provide insights into LLMs' strengths and weaknesses in temporal reasoning."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "-\tAs mentioned in the limitation section, the benchmark focuses on scenarios where both the start and end times of a fact are mentioned within a single sentence. But real-world temporal information can be spread across multiple sentences or documents.\n-\tThe authors generate questions using templates, which might not fully capture the complexity and variability of natural language found in real-world temporal reasoning tasks."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Some details are missing.\n - Line 212: “we generated questions per graph generation and per question type”: Please explain how to generate such questions. Are they generated from templates, manual annotations, or LLMs?\n - Line 369: Is it because the superior performance on longer contexts? Is there a correlation between long-context performance (or overall task performance e.g., MMLU, GSM8K, MATH500) and the final temporal reasoning performance? Are there sufficient test cases with more edges for providing robust evaluation?\n- Typos:\n - Line 275: Funcionalizing → Functionalizing"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The data synthesis process benefits from the graph-guided control, and could be generalized to many other tasks.\n- The constructed data are comprehensive and include many perspectives with quality control.\n- Experiments are extensively conducted on multiple aspects, and provide some insights on future directions."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Dealing with the dataset quality and potential leakage problems, this paper introduces a novel method to synthesize a benchmark for comprehensive temporal reasoning benchmarks. The benchmark contains semantic and arithmetic questions with fine-grained topology control. Extensive experiments are conducted and show insightful conclusions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Some claims lack quantitative evidence:\n - “real-world data that LLMs may have encountered during pre-training or employ anonymization techniques that can inadvertently introduce factual inconsistencies” Could you add some quantitative evidence showing the GPT-4 or Gemini-1.5 Pro baselines suffer from pre-training data contamination?\n - “LLMs could even potentially guess the original entities due to their adjacent relations” This also lacks quantitative evidence. If this is common knowledge, relevant references should be cited.\n- The literature review is insufficient: there is considerable research on math-related temporal reasoning tasks, yet relevant references are missing from the introduction and the related work.\n - Wang, Y., & Zhao, Y. (2023). Tram: Benchmarking temporal reasoning for large language models. *arXiv preprint arXiv:2310.00835*.\n - Chu, Z., Chen, J., Chen, Q., Yu, W., Wang, H., Liu, M., & Qin, B. (2023). Timebench: A comprehensive evaluation of temporal reasoning abilities in large language models. *arXiv preprint arXiv:2311.17667*.\n - Su, Z., Zhang, J., Zhu, T., Qu, X., Li, J., Zhang, M., & Cheng, Y. (2024). Timo: Towards Better Temporal Reasoning for Language Models. *arXiv preprint arXiv:2406.14192*."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Have the authors considered how the format of the date/time, such as words versus numerical format, might influence the model’s performance?\n\n2. For 4.1, 4.1.1, what task does the author evaluate?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The method works on temporal reasoning with LLMs, an important area of research that contributes to understanding models' overall complex reasoning capabilities.\n\nThe authors conduct several experiments. Their analysis and the data offer valuable insights for future research."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this work, the authors introduce two novel synthetic datasets, TOT-Semantic and TOT-Arithmetic, specifically designed to evaluate LLMs’ temporal reasoning abilities with graph-like facts from two perspectives: (1) understanding the semantics and logic of time, and (2) performing accurate temporal arithmetic. The authors also conduct extensive experiments to examine how LLM performance is influenced by the graph structure, graph size, question type, and fact ordering of the problem."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper lacks detail on dataset construction. For instance, how are the final questions generated in both TOT datasets? Are templates being used? (see also question 1)\n\nThe number of baselines is limited. Additional approaches could include directly generating code for TOT-Arithmetic or applying few-shot prompting or self-consistency."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024test,\ntitle={Test of Time: A Benchmark for Evaluating {LLM}s on Temporal Reasoning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=44CoQe6VCq},\nnote={under review}\n}"
},
"abstract": {
"value": "Large language models (LLMs) have showcased remarkable reasoning capabilities, yet they remain susceptible to errors, particularly in temporal reasoning tasks involving complex temporal logic. Existing research has explored LLM performance on temporal reasoning using diverse datasets and benchmarks. However, these studies often rely on real-world data that LLMs may have encountered during pre-training or employ anonymization techniques that can inadvertently introduce factual inconsistencies. In this work, we address these limitations by introducing novel synthetic datasets specifically designed to assess LLM temporal reasoning abilities in various scenarios. The diversity of question types across these datasets enables systematic investigation into the impact of the problem structure, size, question type, fact order, and other factors on LLM performance. Our findings provide valuable insights into the strengths and weaknesses of current LLMs in temporal reasoning tasks. To foster further research in this area, we will open-source the datasets and evaluation framework used in our experiments."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Temporal Reasoning",
"Temporal Graphs",
"LLMs"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/ac2cbe1ab8f6f2d6ce50a2daa9df51dd46ae9dc2.pdf"
},
"presentation": null,
"primary_area": {
"value": "datasets and benchmarks"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/52c7bd2b9f6ade089608747b47027aa3bbfe3928.zip"
},
"title": {
"value": "Test of Time: A Benchmark for Evaluating LLMs on Temporal Reasoning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
44IKUSdbUD | Weighted Diversified Sampling for Efficient Data-Driven Single-Cell Gene-Gene Interaction Discovery | main | Active | Gene-gene interaction;sampling | other topics in machine learning (i.e., none of the above) | 1;3;5 | 4;2;4 | 2;1;3 | 1;1;2 | 2;1;3 | 3 | 3.333333 | 2 | 1.333333 | 2 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see the weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The proposed method significantly reduces computational requirements without sacrificing accuracy. This approach addresses a key challenge in handling large-scale single-cell transcriptomic data.\n- By leveraging Transformer models (CelluFormer) for gene-gene interaction discovery, the paper effectively adapts state-of-the-art NLP techniques to bioinformatics.\n- The extensive experimental validation across multiple datasets and comparison with various baselines (e.g., Pearson Correlation, Spearman’s Correlation) provides empirical support for the proposed method’s effectiveness and robustness in data efficiency."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a novel computational framework designed to discover gene-gene interactions linked to complex diseases through single-cell transcriptomic data. Utilizing a Transformer model named CelluFormer, the authors address the challenge of computational efficiency by implementing a weighted diversified sampling algorithm. This algorithm allows the selection of a representative data subset by calculating a diversity score based on the Min-Max density kernel."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper does not thoroughly discuss potential limitations or biases in using Transformer attention maps for gene-gene interaction discovery, such as how model-specific patterns may impact biological interpretability or generalizability.\n- While the diversity score and sampling algorithm are well-motivated, the paper lacks detailed explanations regarding parameter sensitivity (e.g., choice of sample size) and the scalability of the method to even larger datasets or different cell types beyond the focus on Alzheimer’s Disease data.\n- The paper could benefit from a more comprehensive comparison with existing gene interaction discovery techniques, especially non-Transformer-based methods that might offer complementary insights or efficiency advantages.\n- For the scGPT model, there are cases where it performs better than the proposed method on specific datasets. Therefore, simply attributing the foundation model's lower performance to overfitting to pretrained knowledge or a mismatch between pretraining and fine-tuning data seems insufficient to support the claim that it underperforms compared to the proposed method."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Refer to the weaknesses for the questions that need to be addressed. \nMore general ones:\n- What are the related methods for this problem?\n- What is the model proposed here?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "N/A"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a transformer-based method for gene-gene interaction discovery in single-cell data. The key idea of the method is a density-based sampling approach to reduce the data size."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper has major problems that prevent me from understanding the method itself and its relation to related methods such as the transformer architecture and previous work on the same problem.\n- The problem setting is not clear: It is stated that a dataset X is given, so how is a disease D represented in the dataset? What needs to be learned? How many dimensions are there in X, |V| or m?\n- The model is not clear: what is \"f\"? The proposed model is not described anywhere in the text. Where is the interaction map in the model? What does it mean that \"f\" can successfully predict a disease infection? What does it mean for a gene pair to contribute the most to \"f\"?\n- I think the paper simply refers to the transformer architecture for the model, but the data here are vectors.\n- Condition 2.1 on permutation invariance is totally misleading. \"f\" is defined for vectors; should we permute vector elements?\n- While the data is said to be sparse, there are up to ... 12000 expressed genes in a cell, and most of them have 2000+ expressed genes?\n- The definition of Min-Max density is clumsy, since it could instead refer to its root: kernel density estimation."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Why develop a new transformer model if the goal is just to identify gene-gene interactions? It seems like it would be easier to work with some existing model.\n\nHow does the proposed Transformer model differ from scGPT, GeneFormer, and scFoundation? On line 124 we are told that they are similar to CelluFormer, but I don't see any indication of how they differ.\n\nHow do the \"scatter addition operations\" described in line 231 work? This entire paragraph would benefit from a more formal treatment, since I found the textual description very hard to follow.\n\nOn the face of it, the fact that you can achieve the same performance from 1% of the data as from 100% can have two interpretations: you made fantastic use of the 1% to achieve performance comparable to 100%, or you made terrible use of the 100% and only achieved performance as good as using 1%. How do we know which is the case here?\n\nHow are the cell type labels in Table 1 derived? More detail needs to be given to make this experiment reproducible."
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The problem of detecting gene-gene interactions directly from scRNA-seq data is important."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper aims to use a transformer model trained on scRNA-seq data to identify gene-gene interactions. The approach involves combining the attention matrices over all the layers of the transformer and then evaluating whether the resulting aggregated attention values give higher weights to known pairs of interacting genes. To make the approach computationally feasible, the authors also develop a sketching procedure to select a representative subset of cells."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The main idea here -- that you can infer gene-gene interactions by looking at the attention map in the transformer -- is pretty obvious.\n\nThe proposed sketching procedure is not compared to any existing methods.\n\nThe proposed transformer model is insufficiently described, and the paper doesn't say how it differs from existing models.\n\nOne of the major contributions here is a method for finding representative subsets of cells from scRNA-seq data (Section 3). Unfortunately, this problem is already fairly well studied, and the paper fails to cite any existing methods that tackle this problem (e.g., Hie et al., Cell Systems 2019; Yang et al., ACM-BCB 2020; Yi & Stanley, bioRxiv, 2023; Hao et al. Nat Biotech 2024). These methods should be employed and compared against.\n\nMore generally, there are many existing methods for detecting gene-gene interactions from scRNA-seq data. Prominent examples include SCENIC (Aibar Nature Methods 2017), GRNBoost2 (Moerman Bioinformatics 2019), PIDC (Chan Cell Systems 2017), SCODE (Matsumoto Bioinformatics 2017), SCRIBE (Moerman Nature Communications 2019). This large literature is not cited, and none of the methods therein is compared against.\n\nIn line 161, you say that you train a Transformer model to classify whether a cell is an \"Alzheimer's disease-infected cell or not.\" First, Alzheimer's is not an infection. But more importantly, there is no way to label individual cells as being affected by Alzheimer's or not. I am guessing that this sentence should say that you are labeling cells based on whether they come from an individual with Alzheimer's disease. How exactly this is done should be clarified.\n\nThe evaluation of the gene-gene interactions is problematic. The approach amounts to filtering a large set of known gene-gene interactions to include only those interactions that are implicated in Alzheimer's disease. This ignores the fact that many genes not involved in Alzheimer's disease continue to function and interact with one another in the cell. It's not actually clear to me whether the evaluation considers pairs of genes not involved in Alzheimer's. It seems, from the description, like these pairs are treated as non-interacting."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024weighted,\ntitle={Weighted Diversified Sampling for Efficient Data-Driven Single-Cell Gene-Gene Interaction Discovery},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=44IKUSdbUD},\nnote={under review}\n}"
},
"abstract": {
"value": "Gene-gene interactions play a crucial role in the manifestation of complex human diseases. Uncovering significant gene-gene interactions is a challenging task. Here, we present an innovative approach utilizing data-driven computational tools, leveraging an advanced Transformer model, to unearth noteworthy gene-gene interactions. Despite the efficacy of Transformer models, their parameter intensity presents a bottleneck in data ingestion, hindering data efficiency. To mitigate this, we introduce a novel weighted diversified sampling algorithm. This algorithm computes the diversity score of each data sample in just two passes of the dataset, facilitating efficient subset generation for interaction discovery. Our extensive experimentation demonstrates that by sampling a mere 1% of the single-cell dataset, we achieve performance comparable to that of utilizing the entire dataset."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Gene-gene interaction",
"sampling"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/a2657d591c77e49d44dcb156bff19622cab0971b.pdf"
},
"presentation": null,
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Weighted Diversified Sampling for Efficient Data-Driven Single-Cell Gene-Gene Interaction Discovery"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
44WiKy8THW | Integrating Geodesic Interpolation and Flow Matching for Non-Autoregressive Text Generation in Logit Space | main | Active | Flow Matching;Non-autoregressive text generation | generative models | 1;3;5 | 2;3;3 | 1;3;2 | 1;2;2 | 1;1;3 | 3 | 2.666667 | 2 | 1.666667 | 1.666667 | 0.866025 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Given the low quality of presentation, I have no further questions. I hope that the authors can make full preparations and improvements before the next submission."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1) Novel theoretical approach: The use of KL-divergence geodesics for flow matching in discrete sequence modeling is a novel concept. The theoretical justification provided for the likelihood function and its relation to the flow matching velocity adds to the rigor of the method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a novel approach for non-autoregressive text generation in logit space. It uses Kullback-Leibler (KL) divergence geodesics for flow matching between initial and target distributions of discrete sequences. A loss function is defined to maximize the conditional likelihood of discrete tokens, and its theoretical properties are explored. Despite initial poor results on the TinyStories dataset, an empirical sampling scheme based on a pretrained denoiser is proposed, which significantly improves performance. The method is also applied to image generation tasks for comparison."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1) Extremely low writing quality: The writing and presentation of this article are extremely poor and unreasonable.\n\n2) Limited dataset evaluation: The evaluation is conducted on two uncommon datasets."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See the Weaknesses part"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper introduces a novel application of KL-divergence geodesics for text generation, addressing limitations in linear interpolation commonly encountered in discrete sequence modeling.\n\n- The use of a pretrained denoiser-based empirical sampling scheme demonstrates ingenuity, compensating for initial performance shortcomings and achieving improved generation results."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a novel method for non-autoregressive text generation using KL-divergence geodesics and flow matching in logit space. The authors propose a conditional flow matching approach to address the challenges of discrete sequence modeling, demonstrating theoretical alignment between the loss function and flow matching velocity. To enhance performance, they implement an empirical sampling scheme based on a pretrained denoiser. Experiments on both text and image datasets show that the method outperforms traditional autoregressive models. Despite promising results, the sampling technique lacks full theoretical justification."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper seems to have been written in a hurry and lacks proper polish, with numerous missing references that make it difficult for me to follow. For example, references are missing at lines 32, 33, 39, 53, and 90, which disrupts the flow of the paper.\n- I find the experimental section quite limited, as it only includes a single experiment for both text and image generation. A detailed ablation study is missing, making it hard to understand the impact of different components.\n- I believe the evaluation metric for text generation is too restricted, relying almost exclusively on perplexity. While perplexity is useful for understanding how well the generated text fits the probable distribution, it can fail to capture semantic richness. I would recommend adding metrics like BLEU, ROUGE, or exploring newer evaluation methods for a more comprehensive assessment.\n- After reading the introduction, I still do not fully understand why flow matching is necessary for generation models. The motivation for choosing this specific approach remains unclear to me."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "N/A"
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "N/A"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work presents a flow matching approach for generating discrete sequences. This approach treats discrete tokens as one-hot vectors and constructs a flow by interpolation on the logit space. Randomized top-k sampling is proposed for inference."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "This paper is only half-baked and needs substantial refinements before resubmission. For example, the presentation is poor (many variables are not explained, Figure 1/2 have the same caption, and some references are placeholders), experiments are only conducted on toy datasets (Tiny Stories, MNIST), and evaluation metrics are not sound (only use generative perplexity for language modeling)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024integrating,\ntitle={Integrating Geodesic Interpolation and Flow Matching for Non-Autoregressive Text Generation in Logit Space},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=44WiKy8THW},\nnote={under review}\n}"
},
"abstract": {
"value": "Non-autoregressive language models are emerging as effective alternatives to autoregressive models in natural language processing, enabling simultaneous token generation. This study presents a novel flow matching approach using Kullback-Leibler (KL) divergence geodesics to interpolate between initial and target distributions for discrete sequences. We establish a loss function that maximizes the conditional likelihood of discrete tokens, demonstrating that its maximizer corresponds to the flow matching velocity under logit interpolation. While initial tests on the TinyStories dataset yielded unsatisfactory results, we introduce an empirical sampling scheme based on a pretrained denoiser, which significantly improves performance."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Flow Matching",
"Non-autoregressive text generation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/165278a7d107aa97cc706e1f207eb25ca1a21298.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Integrating Geodesic Interpolation and Flow Matching for Non-Autoregressive Text Generation in Logit Space"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
44cMlQSreK | On Quantizing Neural Representation for Variable-Rate Video Coding | main | Active | Variable Rate;Video Coding;Quantization;Neural Representation | applications to computer vision, audio, language, and other modalities | 5;6;6;6;8 | 4;4;3;4;4 | 3;3;3;3;3 | 2;3;3;3;3 | 3;3;3;3;3 | 6.2 | 3.8 | 3 | 2.8 | 3 | 0.102062 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. How the figure 3 is generated. what is the architecture details of the INR network, and what kind of data is used for fitting. Does the analysis is also true for MLP?\n\n2. What is the significance of the equation 5 and 6, whether this optimization problem is solved in the paper. In the abstract, it was mentioned, PTQ was formulated as the mixed-precision quantization, it was not evident for me where the mixed precision quantization is solved. From the table 1, it seems like the mixed-precision quantization was not used. Also detail how the mixed-precision quantization is used.\n\n3. For Nagel et.al (2020) which formulation was used to compare? In Nagel et. al (2020) whether the equation (25) or equation (21) is used in their respective paper. It is important to specify, the equation (25) is closer to the loss (network calibration) in the proposed paper.\n \n4. In equation (16), how the $\\mathbf{s}$ is determined, is it learned with respect to the task loss, or is it optimized by the greedy search or is it fixed parameter. \n\n5. the network-wise calibration might also be applicable to the generalized neural network, did the authors have done any experiments on the generalized neural codec? \n\n6. For the quantization aware training approach, the weights initialization with post-training quantization will improve the convergence and reconstruction quality of the QAT. It would be nice to test this feature on some INR method which uses QAT."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1) Using one single model for different bit rates with post-training quantization is interesting. This alleviates the need to train a model for each bit-rate, this will decrease the training time.\n\n2) The paper provides the mathematical insights to their proposed method, inspired from the Nagel et. al (2020), and formulates the post-training quantization objective with respect to the network calibration. \n\n3) The experimental results show that the proposed method has a significant gain in the variable-rate coding."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors propose a post-training quantization method tailored to implicit neural representation (INR) based image and video compression. They argue that existing post-training quantization methods are not suitable for INR-based image and video codecs, and advance the existing PTQ for this specific task. Furthermore, the authors demonstrate how their proposed method can tackle variable rate coding with INR using a single INR model. They experimented with their method on top of existing INR methods and showed that their method performs better with minimal reconstruction loss."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The authors failed to compare their proposed approach with Neural Network Coding tool (NNC) [1] which also performs post-training quantization, and also can offer variable-bitrate coding by adjusting QP parameters. The authors should compare their method with NNC.\n\n\n [1] S. Wiedemann et al., \"DeepCABAC: A Universal Compression Algorithm for Deep Neural Networks,\" in IEEE Journal of Selected \n Topics in Signal Processing, vol. 14, no. 4, pp. 700-714, May 2020\n https://arxiv.org/abs/1907.11900"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "How does NeuroQuant differ from traditional quantization methods in terms of bitrate adjustment?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The proposed method achieves variable-rate coding by adjusting QPs of pre-trained weights, eliminating the need for repeated model training for each target rate, which significantly reduces encoding time and complexity.\n\n2. The method demonstrates superior performance in compression efficiency, outperforming competitors and enabling quantization down to INT2 without notable performance degradation.\n\n3. The paper proposes a unified formula for representation-oriented PTQ calibration, streamlining the process and improving its applicability across different models.\n\n4. The approach is backed by both empirical evidence and theoretical analysis, ensuring its robustness and effectiveness in practical applications."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "NeuroQuant is a cutting-edge post-training quantization method for variable-rate video coding that optimizes pre-trained neural networks for different bitrates without retraining."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "N/A\nActually, I am not very familiar with this field, so please have AE consider the opinions of other reviewers more."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- The encoding runtime for HiNeRV 3M (22 hours) looks significantly longer than reported in the original paper, even accounting for differences in GPUs. Additionally, the reported memory usage seems unusually high. Are there any differences in configuration, such as the use of FP16 precision in these experiments?\n- How much time is required to obtain additional rate points with models like NeRV, HNeRV, FFNeRV, and HiNeRV? Although these models require a pretraining phase, the pretrained model can be fine-tuned to produce multiple rate points by adjusting quantization levels or using entropy coding with regularization [1]. Fine-tuning time is substantially shorter than full training (e.g., 30 epochs for QAT versus 300 + 60 epochs for HiNeRV).\n- What is the computational cost (in terms of MACs and wall time) for the proposed calibration process compared to QAT?\n\n[1] Gomes, Carlos, et al. \"Video compression with entropy-constrained neural representations.\""
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The results look promising. Although NeuroQuant achieves only a marginal improvement over the current best INR-VC (-4.8%), it provides greater efficiency in obtaining multiple rate points.\n- The experiments comparing different quantization methods are comprehensive, which will be helpful for future work in this area."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors propose post-training quantization for INR-VCs, which achieves variable-rate coding more efficiently than existing methods that require retraining the model from scratch. The proposed model realizes variable bitrate by considering the sensitivity of the weights to quantization, while also incorporating better theoretical assumptions for INR-VC compared to other post-training quantization techniques. The proposed method demonstrates both improved RD performance and faster encoding speed."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- In Table 2, excluding the pretraining time for NeuroQuant does not seem appropriate. Even with NeuroQuant, pretraining is still required, and the current presentation may be misleading. The authors should consider reporting the pretraining and fine-tuning times separately for both the baseline models and NeuroQuant.\n- Similarly, the claim of an 8x encoding speedup is also misleading, as it excludes the pretraining time required for INR-VC encoding (even though NeuroQuant avoids full retraining for each rate point).\n- Variable/learnable quantization levels for INR-VC have been explored in related works [1,2,3], so the paper’s claim is inaccurate (e.g., line 43). These methods, which resemble the proposed mixed-precision quantization, also enable fine-tuning for multiple rate points from a single pretrained model (but with QAT). These methods should be discussed and compared in the paper.\n- For a fairer comparison, the comparisons to x264/x265 should avoid using the 'zerolatency' setting, as the INR-VCs in the paper inherently have non-zero latency.\n- For ablation study, more sequences should be use for obtaining a representative result.\n\n[1] Zhang, Yunfan, et al. \"Implicit neural video compression.\"\n[2] Gomes, Carlos, et al. \"Video compression with entropy-constrained neural representations.\"\n[3] Kwan, Ho Man, et al. \"Immersive Video Compression using Implicit Neural Representations.\""
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1.\tI think that the RD performance of the PTQ method may be slightly inferior to QAT. Could you explain why the proposed PTQ method has better RD performance compared to FFNeRV/HiNeRV?\n2.\tCould you provide a detailed explanation of the Encoding Complexity section? Does it refer to the encoding complexity of a single bitrate or multiple bitrate?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Compared with existing quantization methods, the proposed method could achieve significant performance improvement, indicating the efficiency of the proposed method.\nThere are some highlights for the proposed post-training quantization method for INR-VC:\n1.\tA criterion for optimal sensitivity in mixed-precision INR-VC was proposed, enabling the allocation of different bitwidth to network parameters with varying sensitivities.\n2.\tThrough network-wise calibration and channel-wise quantization strategies, NeuroQuant minimize quantization-induced errors, arriving at a unified formula for representation-oriented PTQ calibration."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a novel post-training quantization approach designed for INR-VC called NeuroQuant that enables variable-rate coding without complex retraining. It redefines variable-rate coding as a mixed-precision quantization problem. Through network-wise calibration and channel-wise quantization strategies, NeuroQuant achieves SOTA performance compared to popular PTQ and QAT methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tThe authors did not provide a detailed explanation as to why the proposed PTQ method would be superior to QAT methods such as FFNeRV and HiNeRV.\n2.\tIn the Encoding Complexity section, the authors did not provide a detailed explanation of whether the acceleration brought by NeuroQuant is due to the absence of QAT optimization during training or because NeuroQuant does not require retraining for adjustments at different bitrates."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. I wonder why the improvement is such impressive, compared to the existing quantization approaches. Is the channel-wise quantization conducted on both weight and activation?\n1. It seems like the key point of this approach is to adopt a mix-precision one-block BRECQ on INR video coding models. Despite the story about the unique properties of non-generalized INR models, can we further develop the method to a more general MP PTQ with calibration?\n2. Is this inter-layer independence a good property for evaluated INR models, or an ill pose? I would appreciate if the authors provide some insight about this property."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper is well-written and easy to follow. The authors clearly explain their motivation for adopting mix-precision PTQ for variable-rate INR video coding. The experimental results are impressive, with significant PSNR improvements (*e.g.* 0.2 db @6bit and 3 db @2bit for NerV) on all the experimental settings."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors aim to introduce the variable rate control for INR-based video coding, by simply using PTQ. Therefore, they investigate PTQ approaches with mixed precision on those INR models. They first validate a weak layer independence in such non-generalized INR models. This challenges Hessian-based quantization methods, as they often follow this assumption and adopt diagonal Hessians. Then the authors propose a perturbation-based approach to estimate the intractable Hessian-involved sensitivity criterion (Omega) in the section with eq.9 and eq.10. Therefore, they can perform bit allocation for mix-precision quantization. Then the authors adopt network-wise calibration to further decrease the quantization error. The proposed approach named NeuroQuant achieves a cutting-edge performance w.r.t. both single QP and the whole RD curve."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The comparison in Table 1 may be unfair. If I understand correctly, the proposed approach involves mix-precision quantization, which helps bit allocation among layers. Therefore, it introduces extra quantization step parameters (${s}$ in eq.17) to store. I wonder whether the bpp calculation in Fig.4 and Fig.5 considers this quantization parameter. On the other hand, AdaQuant and BRECQ are fix-precision methods so this parameter can be omitted. The authors should clarify their evaluation details, especially the calculation of bpp.\n1. The calibration objective derivation in section 3.2 is similar to the *Network-wise Reconstruction* situation discussed in the existing BRECQ paper (Li et al. 2021b, section 3.2). And the authors are also aware of this prior approach. Intuitively, intra-network independence can be seen as one-block intra-block independence, and BRECQ covers this. In behavior, both the calibration methods adopt an MSE-form objective. Thus, I cannot easily recognize those analyses as the contribution of this paper. I request the authors further clarify their contribution against the existing approaches.\n2. It would be better to provide more intuitive explanations of the proposed approach, e.g. diagram figures and pseudo algorithm. Considering that not all experts in the video coding community are familiar with model quantization, the math formulations are somewhat confusing."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024on,\ntitle={On Quantizing Neural Representation for Variable-Rate Video Coding},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=44cMlQSreK},\nnote={under review}\n}"
},
"abstract": {
"value": "This work introduces NeuroQuant, a novel post-training quantization (PTQ) approach tailored to non-generalized Implicit Neural Representations for variable-rate Video Coding (INR-VC). Unlike existing methods that require extensive weight retraining for each target bitrate, we hypothesize that variable-rate coding can be achieved by adjusting quantization parameters (QPs) of pre-trained weights. Our study reveals that traditional quantization methods, which assume inter-layer independence, are ineffective for non-generalized INR-VC models due to significant dependencies across layers. To address this, we redefine variable-rate INR-VC as a mixed-precision quantization problem and establish a theoretical framework for sensitivity criteria aimed at simplified, fine-grained rate control. Additionally, we propose network-wise calibration and channel-wise quantization strategies to minimize quantization-induced errors, arriving at a unified formula for representation-oriented PTQ calibration. Our experimental evaluations demonstrate that NeuroQuant significantly outperforms existing techniques in varying bitwidth quantization and compression efficiency, accelerating encoding by up to eight times and enabling quantization down to INT2 with minimal reconstruction loss. This work introduces variable-rate INR-VC for the first time and lays a theoretical foundation for future research in rate-distortion optimization, advancing the field of video coding technology."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Variable Rate",
"Video Coding",
"Quantization",
"Neural Representation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/96412127d05f6be93a968423bb1d7564ca041f0c.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "On Quantizing Neural Representation for Variable-Rate Video Coding"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
44hcrfzydU | FedTMOS: Efficient One-Shot Federated Learning with Tsetlin Machine | main | Active | Efficient Federated Learning;One Shot Federated Learning;Tsetlin Machine | other topics in machine learning (i.e., none of the above) | 3;5;5 | 3;5;4 | 2;2;3 | 2;3;3 | 1;2;2 | 4.333333 | 4 | 2.333333 | 2.666667 | 1.666667 | 0.866025 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1) The authors mention using a standard compute node for evaluating server side latency. Does this mean that the node was GPU equipped? It would be unfair to measure the latency of DNN based approaches without using a GPU equipped node."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The application of Tsetlin Machines to OFL is novel and offers an interesting alternative to standard KD based methods which are compute intensive\n- The method is data-free\n- The authors provide comprehensive evaluations on communication and compute efficiency alongside accuracy which showcase the strength of the approach"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents FedTMOS, a compute efficient one-shot Federated Learning (FL) algorithm that leverages Tsetlin Machines. Tsetlin Machines present an alternative to DNNs, known for their low complexity, compute and storage efficiency along with good performance. FedTMOS learns client-specific TMs and derives an aggregated server side TM that enhances class distinction. The aggregation procedure is significantly cheaper than traditional KD based methods while being data-free. The authors show comprehensive empirical results on standard OFL benchmarks under non-IID data."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper can be improved on several fronts as listed below:\n1) The paper offers no discussion on the limitations of Tsetlin Machines and its broader applicability. While TMs are an evolving research area, DNNs are the norm today. Thus, an elaborate discussion of its current limitations will strengthen the paper by well informing the community on its wider applicability. For instance, can TMs be applied to NLP based tasks such as those based on transformer models as of today? \n2) A significant portion of the proposed algorithm in Section 4 is explained in sentences, making it difficult to follow without using mathematical references to the quantities being discussed. For instance, equation (4) describes general k-means clustering without reference to actual scaled weights which are being clustered. Section 4.2.2 uses no mathematical expressions to describe the proposed algorithm. The paper can be greatly improved by defining appropriate notation for quantities being referred to at the beginning of Section 4 and then using this notation throughout while explaining the proposed approach. \n3) The paper misses an important baseline, FedFischer [1] which is more compute efficient on the server side as compared to the KD based methods and offers strong accuracy. In general, the paper misses related work involving averaging based schemes such as OT-Fusion [2] and RegMean [3] which offer low server side latency. \n4) Lack of theory to justify the performance improvements as compared to the evaluated baselines. Can the authors provide more insights into the accuracy improvements achieved?\n5) With the increasing availability of large pre-trained models, conducting OFL starting from a pre-trained initialization is shown to significantly improve performance [1]. How can a TM incorporate pre-trained weights from other TMs trained on large datasets?\n\n[1] Jhunjhunwala, Divyansh, Shiqiang Wang, and Gauri Joshi. 
\"FedFisher: Leveraging Fisher Information for One-Shot Federated Learning.\" International Conference on Artificial Intelligence and Statistics. PMLR, 2024.\n\n[2] Singh, Sidak Pal, and Martin Jaggi. \"Model fusion via optimal transport.\" Advances in Neural Information Processing Systems 33 (2020): 22045-22055.\n\n[3] Xisen Jin, Xiang Ren, Daniel Preotiuc-Pietro, and Pengxiang Cheng. Dataless knowledge fusion by merging weights of language models. In The Eleventh International Conference on Learning Representations, 2023."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Why do authors typically introduce Testlin Machine, which is an automation machine rather than leveraging a general reinforcement learning scheme where penalty, reward, and stage changing are involved? What is the motivation for doing so? How different is the solution with general reinforcement learning based one-shot FL? e.g. \n\n2 It is not common to use Gini index to measure data distribution. There are more common solutions. For example, the simplest way is Gaussian Model. But it is possible that clients data are non-i.i.d. In that case, a simple solution is to do some sampling. In some semi-supervised federated learning, uploading hard or soft labels is also fine. Choosing Gini index is neither a straightforward nor a trivial option. How did you come up with that? And why can authors benefit from that?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The idea of introducing Testlin Machine into one-shot federated learning is innovative, aiming to solve the bottleneck of using public datasets.\n\n2. The authors clearly described the background, laying emphasis on Testlin Machine, making the paper self-contained.\n\n3. The authors evaluated the solutions over client numbers of a certain scale, e.g. 20, 50, 80, which is a critical factor in one-shot federated learning."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors leveraged Testlin Machine to resolve the bottleneck in one-shot federated learning, saving the communication cost and reducing the necessity of using a public dataset. The proposed solution views the one-shot federated learning in a different prospective, in the form of automation machines."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The reviewer acknowledges the innovation of introducing Testlin Machine, however the motivation for doing so is not well explained. The authors spent certain paragraphs describing Automation Machine and the mechanisms in machine learning. Nevertheless, how such a mechanism can benefit machine learning and federated learning is not illustrated. Moreover, why the key bottleneck in one-shot federated learning can be resolved is not explained. In other words, the current solution looks like converting a conventional question into a mechanism of a automation machine. For example, it likes a task converting a coding task into Moore Machine in algorithm lectures.\n\n2 Many choices of approaches are not well justified. See more details in the reviewer's questions.\n\n3. The empirical evaluation can be improved. The authors claimed that they used various datasets. However, these are very basic datasets like MNIST,SVHN, and CIFAR10. The reviewer suggested using more complex datasets such as Tiny-ImageNet. For datasets like MNIST, even if we are not doing one-shot federated learning, few epochs and communication rounds are needed to achieve convergence. The effectiveness, particularly in terms of convergence and accuracy, can be correctly justified by using a more complex dataset. \n\nOther minor writing issues:\n1. The acronym in the paper is not of common use. OFL is not a common usage for one-shot federated learning. Directly saying one-shot FL is fine. TM is usually referred to Turing Machine.\n2. Table 1 and Table 4 are out of bounds."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "- Since FedTMOS uses a non-DNN model, is its scalability being limited by Tsetin Machine? Can it achieve comparable performance when other baseline methods employ stronger networks (e.g., ResNet) on challenging datasets (e.g., Tiny-ImageNet)?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- Employing Tsetlin Machine in one-shot federated learning is interesting.\n- The proposed FedTMOS significantly reduce the communication costs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper propose FedTMOS for efficient one-shot federated learning (FL). FedTMOS employs Tsetlin Machine instead of DNNs to reduce upload costs and presents a novel data-free solution to generate server model. Experimental results show that FedTMOS outperforms existing one-shot FL methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- It is unclear whether the performance improvement in Table 1 comes from the performance gap between the CNNs and CTM. It is suggestted to report the performance of CNNs and CTM in a centralized(non-federated learning) setting.\n- My main concern with this work is its applicability, as it is limited to a specific machine learning model. In my view, machine learning models and tasks should primarily serve as a testbed for evaluating federated learning algorithms. They should not be restricted to particular models, unless exploring new applications of federated learning in emerging areas, such as diffusion models or large language models. However, this paper addresses a well-established image classification task and is effective only for the Tsetlin Machine, which limits its practical application.\n- The readability of this paper can be further improved. For instance, in line 146, what does the $j$ of $L_j$ stand for, and how to get the definition of the $L_j$ from the definition of $L$?"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose using Tsetlin Machine for efficient one shot FL without the need for server-side training"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024fedtmos,\ntitle={Fed{TMOS}: Efficient One-Shot Federated Learning with Tsetlin Machine},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=44hcrfzydU},\nnote={under review}\n}"
},
"abstract": {
"value": "One-Shot Federated Learning (OFL) is a promising approach that reduce communication to a single round, minimizing latency and resource consumption. However, existing OFL methods often rely on Knowledge Distillation, which adds a training phase and increases server-side latency. Their performance can also be compromised by the quality of generated data or public datasets, resulting in sub-optimal server models. To address these challenges, we proposed One-Shot Federated Learning with Tsetlin Machine (FedTMOS), a novel data-free OFL framework built upon the low-complexity and class-adaptive properties of the Tsetlin Machine. FedTMOS first clusters then reassigns class-specific weights to form models using an inter-class maximization approach, generating balanced and efficient server models without requiring additional training. Our extensive experiments demonstrate that FedTMOS significantly outperforms its ensemble counterpart by an average of $8.30\\%$, and the leading state-of-the-art OFL baselines by $4.21\\%$ across various datasets. Moreover, it achieves a reduction in server latency by $7.5-45\\times$ and upload communication costs by at least $2.3\\times$, establishing FedTMOS as a highly efficient solution for OFL."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Efficient Federated Learning",
"One Shot Federated Learning",
"Tsetlin Machine"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/eb529955c8d8fe7ccee549b30cc45763ecb330c1.pdf"
},
"presentation": null,
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "FedTMOS: Efficient One-Shot Federated Learning with Tsetlin Machine"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
44pbCtAdLx | I-LLM: Efficient Integer-Only Inference for Fully-Quantized Low-Bit Large Language Models | main | Active | LLM Quantization;Large Language Models;Neural Network Compression | foundation or frontier models, including LLMs | 3;3;5;6;6 | 4;4;3;3;4 | 3;2;2;3;4 | 2;2;2;3;3 | 1;2;3;3;3 | 4.6 | 3.6 | 2.8 | 2.4 | 2.4 | -0.541736 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I have few questions and comments as below: \n\n1. In Table 1, the results for SmoothQuant under the W4A4 setting (e.g., 1.8e4 for the OPT family) are unusually high, especially compared to LLaMA models. This discrepancy should be explained.\n\n2. The experimental setup is unclear, especially regarding Table 4, where latency and speedup for traditional W4A4 are reported. What framework was used, and was Tensor Core applied?\n\n3. To better understand the efficiency of integer-only quantization, comparisons with other quantization works like QServe [1] and Atom [2] should be included.\n\n4. In the quantization algorithms, how are the scale factor and zero-point stored? If they are stored as integers, does this significantly impact accuracy? A discussion on this trade-off is needed.\n\n\n[1].QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving\n[2].Atom: Low-bit quantization for efficient and accurate llm serving"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. This paper addresses the challenge of integer-only quantization, which is often overlooked by existing work as typical LLM quantization methods usually store intermediate results as floating-point values.\n\n2. The paper introduces innovative integer-only operators, such as DI-Exp, DI-ClippedSoftmax, and DI-Norm, to replace computationally intensive floating-point operations.\n\n3. The experimental section includes a thorough comparison across multiple model types and configurations, as well as an ablation study, demonstrating the framework’s efficiency."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes an integer-only post-training quantization (PTQ) framework to accelerate the inference of large language models, called I-LLM. The authors introduce three main techniques: Fully-Smooth Block Reconstruction (FSBR), which reduces inter-channel activation disparities; Dynamic Integer-only MatMul, enabling dynamic quantization and integer-only matrix multiplication; and integer-only non-linear operators such as DI-ClippedSoftmax and DI-Exp, which use bit-shifting for efficiency. Experimental results show that I-LLM achieves accuracy on par with floating-point baselines while delivering significant improvements in computational performance and memory efficiency."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The motivation for this work is not clearly explained, especially regarding why integer-only quantization is necessary. The trade-off between accuracy and inference performance needs more discussion. Additionally, the configuration of different quantization types for weights and activations (e.g., W8A8, W4A8, W4A4) is not discussed.\n\n2. The experimental setup lacks clarity, and more results on inference performance are needed. See detailed comments 1-3.\n\n3. The innovation of the Fully-Smooth Block Reconstruction method is limited, as it closely resembles SmoothQuant. Additionally, the overhead of dynamic quantization should be demonstrated in the experimental results."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- What framework and baseline setup were used for W4A4 quantization, and can more detail be provided about the experimental environment?\n- The description of FSBR is unclear. Could the authors provide a more explicit breakdown of its application across different computing units beyond MatMul?\n- The authors also emphasize the outliers across the token (which sounds new), but Fig. 3 shows that the token-wise distribution looks much flatter than the channel-wise distribution, which is well known."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The integer-only quantization of non-linear operations offers potential efficiency improvements for hardware without FP support.\n- Demonstrates notable performance in W4A4 quantization settings, showing significant reductions in latency and memory usage on specific LLMs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes I-LLM, an integer-only post-training quantization (PTQ) framework for large language models (LLMs), aiming to improve inference efficiency on hardware lacking floating-point support. The key contributions include the Fully-Smooth Block-Reconstruction (FSBR) to smooth inter-channel variations in activation and dynamic quantization methods (Dynamic Integer-only MatMul and others). Experiments on several LLMs, such as LLaMA and OPT, demonstrate improved modest speed and memory usage under W4A4 settings, with minimal accuracy loss compared to floating-point (FP) models. Despite these claims, the methodology largely extends known techniques, showing limited novelty, and lacks sufficient evaluation against state-of-the-art (SOTA) approaches on challenging quantization settings."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The proposed techniques, such as FSBR, heavily build on prior works like OmniQuant for quantization-parameter tuning and I-VIT for dynamic quantization, limiting originality.\n- The absence of comprehensive SOTA comparison limits the rigor of performance claims, particularly missing comparisons with rotation-based methods (e.g., SpinQuant (Liu et al., 2024)) and LUT-based approximations (e.g., NN-LUT(Yu et al., 2022)).\n- FSBR and DI-MatMul introduce computational overhead with on-the-fly operations like 8-bit scalar multiplication/division, yet no detailed ablation study quantifies the latency impact per Transformer component.\n- The evaluation datasets and tasks are limited, and broader testing across more diverse and challenging benchmarks is required to substantiate the generalization of results."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Detail more in depth the steps needed in training."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "First method that covers all the steps of integer inference.\nPresents implementation results, showing significant speedup even in non-adhoc designed hardware\nend-to-end results in inference are presented, not only sub-steps"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper present an end-to-end quantization method that is applied to all the inference sub-steps in LLMs, including non-linear operations and all attention calculations. The technique presented leverages block and dynamic quantization concepts. Specific integer approximation for non-linearities are presented to avoid moving o fp when computing them, End-to-end results are presented, and an implementation of the method on GPU is also presented, showing significant speedups."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "It seems the method is not post-training (the smoothing coefficients are learned), hence it is not applicable for the largest LLMs for which training is not really easy to repeat. \nNotation is hard to follow and could be simplified to get better intuition on the methods applied. \nApproximation proposed for non-linear functions are not really well justified\nSimilar ideas are present in I-BERT, I-ViT, BRECQ. Thus, the contribution of this paper seems to be incremental (see BRECQ: Pushing the Limit of Post-Training Quantization by Block Reconstruction)"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "NA"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The authors provide valuable insights into the quantization of large language models (LLMs), demonstrating how such models can maintain high accuracy despite the reduced precision."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This study introduces a novel quantization approach for large language models (LLMs), featuring custom-designed modules: DI-MatMul, DI-ClippedSoftmax, DI-Exp, and DI-Normalization. These modules collectively outperform state-of-the-art (SOTA) methods, offering enhanced performance in LLM applications."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Consider combining quantization with pruning to achieve enhanced model efficiency and reduced computational overhead.\nConsider a hybrid quantization approach, where different layers utilize varied precision levels, such as W4A4 for certain layers and W8A8 for others, to balance efficiency and performance."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Please refer to Weaknesses"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The proposed framework enables integer-only quantization for LLMs: by completely avoiding floating-point operations, I-LLM takes an important step toward making LLMs deployable on edge devices, achieving faster and more efficient inference on hardware without the need for floating-point support.\n\n2. The authors conducted extensive experiments across various LLM architectures, model sizes, and datasets, with the table data from the manuscript demonstrating overall outstanding performance.\n\n3. This paper introduces techniques like FSBR and DI-MatMul that optimize LLM quantization accuracy by addressing variations across channels and tokens. These techniques help maintain high precision during the inference process."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces I-LLM, a novel integer-based post-training quantization (PTQ) framework for large language models (LLMs). Traditional PTQ techniques involve a mix of integer and floating-point operations, which limits deployment on edge devices that lack floating-point capabilities. The authors propose three techniques: Full-Smooth Block Reconstruction (FSBR) to smooth activation variations across channels, Dynamic Integer Matrix Multiplication (DI-MatMul) to manage variations between tokens, and dynamic integer implementations for nonlinear operations. Experiments show that I-LLM achieves comparable post-compression performance with existing methods while significantly reducing computational overhead."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Section 3.2 of the paper proposes 'training a smoothing coefficient for all activations and weights to aid in restoring the model’s quantization accuracy.' However, the training and solving process of this coefficient is not discussed, which could lead to confusion and misunderstandings. If the reason for not detailing this part is that the method aligns with SmoothQuant or OmniQuant, it should be explicitly cited and clearly explained.\n\n2. Equations (1) and (2) extend SmoothQuant by applying smoothing to 'NonLinear Act-Smooth.' However, the motivation of SmoothQuant is to reduce the difficulty of quantization by lowering the smoothness of activations through a smoothing coefficient. In Equations (1) and (2), the smoothing coefficient is counterbalanced between the Gate-layer and Up-layer using '*s' and '/s', respectively. The paper does not discuss the rationale behind this operation or why 'W' scale is 'times' while the 'V' is 'division'. \n\n3. The definition of $\\sigma$ in Equation (2) is confusing. Based on Equation (2) and line 262, it follows that $\\sigma'(x1) = \\sigma(x1 / s)$, and $\\sigma'(x1') = \\sigma(x1' / s)$. So we can get $\\sigma(x1' / s) = \\sigma(x1 / s)$, i do believe this is not a right equation or hope the author can make a clarification on this.\n\n4. The authors state in line 270 that 'SmoothQuant and OmniQuant are subsets of FSBR.' However, based on the description in this section, it appears that FSBR actually adopts the techniques of SmoothQuant and OmniQuant and extends them within 'NonLinear Act-Smooth.' Referring to them as subsets is inaccurate and could lead to misunderstandings.\n\n5. I have carefully reviewed and evaluated the anonymous code repository provided by the authors. Regarding the DI-MatMul computation process mentioned in Section 3.3 (specifically in *quantize/quant_modules/QuantMatMul, quantize/quantizer), its implementation and definition of 'dynamic' is consistent with the OmniQuant codebase. 
If there are any omissions or misunderstandings, I would appreciate further clarification from the authors.\n\n6. Since the quantizer used in the paper results in the same post-quantization weight bit-width and additional quantizer parameters (scaling factor, zero factor) as methods like OmniQuant, the specific factors behind the reduction in **Weight Memory** listed in Table 4 for I-LLM are not clearly discussed. It would be helpful if the authors could clarify which specific parameter compression or operation contributes to this memory efficiency advantage.\n\n7. In table 5, 'OmniquantQuant'->'Omniquant', 'CLippedSoftamx'->'CLippedSoftmax'"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024illm,\ntitle={I-{LLM}: Efficient Integer-Only Inference for Fully-Quantized Low-Bit Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=44pbCtAdLx},\nnote={under review}\n}"
},
"abstract": {
"value": "Post-training quantization (PTQ) serves as a potent technique to accelerate the inference of large language models (LLMs). Nonetheless, existing works still necessitate a considerable number of floating-point (FP) operations during inference, including additional quantization and de-quantization, as well as non-linear operators such as RMSNorm and Softmax. This limitation hinders the deployment of LLMs on the edge and cloud devices. In this paper, we identify the primary obstacle to integer-only quantization for LLMs lies in the large fluctuation of activations across channels and tokens in both linear and non-linear operations. To address this issue, we propose I-LLM, a novel integer-only fully-quantized PTQ framework tailored for LLMs. Specifically, (1) we develop Fully-Smooth Block-Reconstruction (FSBR) to aggressively smooth inter-channel variations of all activations and weights. (2) to alleviate degradation caused by inter-token variations, we introduce a novel approach called Dynamic Integer-only MatMul (DI-MatMul). This method enables dynamic quantization in full-integer matrix multiplication by dynamically quantizing the input and outputs with integer-only operations. (3) we design DI-ClippedSoftmax, DI-Exp, and DI-Normalization, which utilize bit shift to execute non-linear operators efficiently while maintaining accuracy. The experiment shows that our I-LLM achieves comparable accuracy to the FP baseline and outperforms non-integer quantization methods. For example, I-LLM can operate at W4A4 with negligible loss of accuracy. To our knowledge, we are the first to bridge the gap between integer-only quantization and LLMs."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"LLM Quantization",
"Large Language Models",
"Neural Network Compression"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/fd15aefa1a4fb59d97f89586195fb9f8c6303d64.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "I-LLM: Efficient Integer-Only Inference for Fully-Quantized Low-Bit Large Language Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
44z7HL4mfX | Instruct-SkillMix: A Powerful Pipeline for LLM Instruction Tuning | main | Active | instruction tuning;high quality synthetic data;diverse synthetic data | foundation or frontier models, including LLMs | 3;5;6 | 4;4;4 | 2;2;4 | 2;2;4 | 3;3;3 | 4.666667 | 4 | 2.666667 | 2.666667 | 3 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Have you investigated whether different model architectures or sizes hit the 4K example ceiling at different points?\n\nCould you explain the choice of k=2 for skill combinations? Have you explored other values?\n\nHow would the method perform with different teacher models (e.g., Claude, PaLM)?\n\nWould it be possible that combining synthetic data with human annotations potentially break through the 4K example ceiling?\n\nCould you elaborate on potential approaches for quality control in the data generation process?\n\nCould you provide analysis of model performance on longer-form tasks and multi-turn conversations?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper presents a novel approach to synthetic data generation that achieves strong results with only 4K examples, suggesting an efficient path forward for instruction tuning. The empirical validation is well-designed, testing across multiple benchmarks and models while including careful ablation studies that isolate the effects of different components. \n\nThe method is cost-effective, requiring only about $600 compared to traditional human annotation approaches. \n\nThe authors provide some analysis of how low-quality data affects model performance, offering practical insights for dataset creation. \n\nThe paper also test both their preferred method and a seed-dataset dependent variant, providing comparative insights."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces INSTRUCT-SKILLMIX, a pipeline for creating instruction-tuning datasets using large language models. The method involves two stages: (1) extracting instruction-following skills using an LLM's metacognitive abilities, and (2) generating synthetic (instruction, response) pairs using random combinations of these skills. Using just 4K examples, the authors demonstrate that base models fine-tuned on this data achieve competitive performance on instruction-following benchmarks compared to much larger models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper's most significant limitation is the performance plateau at 4K examples, with no clear explanation or analysis of learning curves as dataset size increases. This is compounded by limited investigation of whether different architectures or model sizes might hit different ceilings. \n\nThe evaluation methodology relies heavily on AlpacaEval 2.0 and lacks assessment of long-form generation and multi-turn conversations. The use of both teacher and grader models from the same model family (GPT-4) raises concerns about potential systematic biases. \n\nAlso, the methodology lacks a principled approach for determining the optimal number of skills or combinations, and provides no systematic quality metrics for the generated data. \n\nThe paper provides limited investigation of how different teacher models might affect results. Lack of this raises questions about the method's generalizability. \n\nThe relationship between skills and model performance remains inadequately explored, with no clear metrics for assessing skill quality or coverage."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "NA"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Can you compare performance to ShareGPT with the responses regenerate with GPT4-Turbo?\n- Can the authors discuss weakness 3?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- This paper shows very strong performance on benchmarks where LLMs are used as a judge.\n- The InstructSkillMix framework is novel and interesting. Moreover, it does not require any seed data, which is beneficial."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose a new instruction tuning data generation pipeline, INSTRUCT-SkillsMIX. They prompt a strong LLM to identify some key instruction following skills. They then use these skills to produce useful instruction following data. They show strong results on instruction following benchmarks where an LLM is used as a judge."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The baseline methods are not fair: the main comparison is to Alpaca 52K, which is really old and known to be a low quality dataset. I think the authors should try comparing their dataset to stronger datasets such as ShareGPT with the responses regenerated by GPT4-Turbo.\n- In my opinion, section 1.1 is somewhat misleading. The authors (in line 70-75) say it is a mystery why public instruction tuning does not match the performance of proprietary instruct models. However, these proprietary models are trained in a variety of stages, and with distillation and RL techniques. It is not expected that instruction tuning alone can match the performance of proprietary models.\n- Table 4 suggests that the main reason for the good performance of InstructSkillMix is that a stronger model is used for distillation compared to previous IFT datasets. With the same judge, Alpaca-1K longest performs similarly to InstructSkill Mix (although Alpaca 1K longest is a weak dataset in my opinion: instructions are created using text-davinci-003). Alpaca-1K longest does perform worse on the length-controlled benchmark, but this is not a fair comparison since Alpaca-1K longest is specifically biased to encourage longer responses. \n- The authors claim that InstructSkillMix is a more performant data generation method than UltraChat and Vicuna (line 511). Although InstructSkillMix seems to be a strong method, my guess is that the primary reason that InstructSkillMix outperforms UltraChat and Vicuna is due to a stronger teacher model. If the authors want to make this claim, I think they should regenerate responses from UltraChat and ShareGPT using the same teacher model InstructSkillMix uses.\n- In my opinion relying only on AlpacaEval 2.0 and MTBench is a bit limited. It would be beneficial for the authors to evaluate their models on other tasks, including mathematical reasoning (MATH, GSM8K), instruction following (IFEval), and knowledge benchmarks (MMLU)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Kindly refer to the weaknesses."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "- It finds that directly prompting a strong LLM to identify crucial skills achieves better performance than extracting skills from existing IFT datasets.\n- The performance of using merely thousands of Instruct-SkillMix data is impressive.\n- The data generation pipeline is fully automated and has nearly no human intervention.\n- It conducts detailed ablation studies and shows the contributions of different components.\n- It reveals that even a small amount of low-quality data greatly harms the instruction following performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper focuses on constructing high-quality data for enhancing base LLMs’ instruction following capability. To achieve this goal, the authors propose a novel pipeline, called Instruct-SkillMix, which naturally combines diversity and difficulty in the data generation process. SFT with the resulting SFT dataset leads to very impressive performance gains on several established benchmarks. In particular, the data generation is fully automatic and the size of dataset can scale up easily."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The type of queries and topics could be relevant to coverage of data. I think it might be worth to do ablation study on the query and topic types."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduce an automated approach for creating diverse, high quality SFT data for instruction-following."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024instructskillmix,\ntitle={Instruct-SkillMix: A Powerful Pipeline for {LLM} Instruction Tuning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=44z7HL4mfX},\nnote={under review}\n}"
},
"abstract": {
"value": "We introduce INSTRUCT-SKILLMIX, an automated approach for creating diverse, high quality SFT data for instruction-following. The pipeline involves two stages, each leveraging an existing powerful LLM: (1) Skill extraction: uses the LLM to extract core “skills” for instruction-following by directly prompting the model. This is inspired by “LLM metacognition” of (Didolkar et al., 2024); (2) Data generation: uses the powerful LLM to generate (instruction, response) data that\nexhibit a randomly chosen pair of these skills. Here, the use of random skill combinations promotes diversity and difficulty. The estimated cost of creating the dataset is under $600. \n\nVanilla SFT (i.e., no PPO, DPO, or RL methods) on data generated from INSTRUCT-SKILLMIX leads to strong gains on instruction following benchmarks such as AlpacaEval 2.0, MT-Bench, and WildBench. With just 4K examples, LLaMA-3-8B-Base achieves 42.76% length-controlled win rate on AlpacaEval 2.0, a level similar to frontier models like Claude 3 Opus and LLaMA-3.1-405B-Instruct. Ablation studies also suggest plausible reasons for why creating open instruction-tuning datasets via naive crowd-sourcing has proved difficult. In our dataset,adding 20% low quality answers (“shirkers”) causes a noticeable degradation in performance.\n\nThe INSTRUCT-SKILLMIX pipeline seems flexible and adaptable to other settings."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"instruction tuning",
"high quality synthetic data",
"diverse synthetic data"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/c5e0b7a04c93de0fb8c615e2c454e0551d7f7b5b.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Instruct-SkillMix: A Powerful Pipeline for LLM Instruction Tuning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
45FzVIdA3T | EDM: Equirectangular Projection-Oriented Dense Kernelized Feature Matching | main | Active | omnidirectional image;image matching;feature matching;dense matching | applications to computer vision, audio, language, and other modalities | 3;5;6;6 | 4;3;4;4 | 2;2;3;3 | 2;2;3;3 | 2;2;3;3 | 5 | 3.75 | 2.5 | 2.5 | 2.5 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "* In Fig. 10, which is the results of baseline methods and the proposed methods?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* Utilize Gaussian Process regissin and spherical positional embedding to establish 3D correspondences between different frames.\n* The refinement for geodesic flow could enhance the performance.\n* The proposed method achieves better performance than baseline methods on various datasets."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes EDM, a learning-based dense match algorithm for omnidirectional images. Specifically, a spherical positional embeddings based 3D cartesian coordiantes and a bidirectional transformations are used to enhance the performance. The experiments on various datasets show its effectivenss."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The novelty of the proposed method is limited since all used modules are proposed in existing methods.\n* There are too many words used to describe the selected datasets in Sec. 5.1, which is not necessary.\n* There are few visual results about the baseline methods and the proposed methods.\n* The baselin methods do not consist of EgoNeRF in Tables 1, and 2, the most recentl method about this task.\n* There is no efficient analysis about the proposed and baseline methods."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Q1) The reviewer is curious about whether the proposed positional encoding and refinement strategy can be applied to other dense matching methods, such as ROMA."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "S1) The reviewer thinks the proposed method is reasonable and effective, which leverages the geometrical property of sphere images to improve the performance of dense matching.\nS2) The paper is clearly written and well-organized. The proposed method is well-explained and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a new method for dense sphere image matching. Previous perspective image matching methods like DKM, and ROMA perform poorly when directly used for matching spherical images due to the severe image distortion. The method proposed in this paper solves this problem by introducing the spherical positional encoding into the coarse global matching of ROMA, and a refinement strategy that regresses offset on the sphere. The proposed method achieves state-of-the-art performance on the dense sphere image matching task and outperforms previous sphere matching methods and perspective matching methods by a large margin."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "W1) Experiments. It seems that the results of baseline DKM and ROMA in Tabel 1, and 2 are obtained using their pre-trained checkpoints. However, the reviewer thinks including their results trained using the same sphere image dataset as the proposed method would be more convincing."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "My main concern lies is the technical contribution. The positional encoding from the spherical camera model lacks in-depth exploration. Architectural or formulation designs for information exchange and matching between distorted images, as well as a more general data augmentation strategy, would be more inspiring."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. This paper combines DKM framework with omnidirectional images by considering the spherical camera model in positional encoding and correspondence optimization. It achieves dense matching omnidirectional images for the first time and achieves SoTA performances across multiple datasets. \n\n2. The paper is well-written and designs detailed ablation experiments to verify the proposed designs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper extends the DKM method to panoramic image registration, achieving improvements across multiple datasets. However, the main contribution lies in (somewhat simple) considerations of the spherical camera model within positional encoding and matching optimization. It does not introduce new insights for the matching task itself, yielding limited technical innovation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. For the reviewer, the innovation of this paper does not meet the standards of ICLR. The core algorithm is derived from DKM, introducing several optimizations for omnidirectional images (such as coordinate representation or transformation based on the spherical camera model). Although the authors demonstrated in Table 3 that these optimizations significantly improve performance over DKM in omnidirectional image matching, the work does not achieve a breakthrough in the dense matching framework, limiting its potential for broader insights and inspiration.\n\n2. The proposed rotation augmentation strategy is designed specifically for vertically fixed cameras, suitable for indoor scenes in Matterport3D and Stanford2D3D. However, such a strategy falls short in scenarios involving extreme rotations or complex outdoor environments."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- In e.g. Figure 7, it can be seen that the model is quite certain about the floor. The reviewer is not certain to understand how this is possible. It's also seemingly pretty certain in the bottom example of a cupboard that does not seem to be covisible. The reviewer is wondering if the authors could explain a bit more about the confidence (under/over) of the model.\n- The reviewer did not find a definition of AUC in the paper. Is it AUC of the relative pose error as in most matching works? Could be good to include in the appendix, especially as ICLR readers may be less familiar with the topic."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The writing of the paper was in most cases clear and easy to follow, in particular the reviewer found the figures helpful in illustrating the method.\n- The proposed approach is simple and does not incur additional computational costs.\n- EDM performs well on Matterport3D (which is also used for training), and Stanford2D3D (which indicates that it also generalizes). The authors have also taken care to evaluate previous sphere-based matchers on their new proposed benchmarks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors present an extension of the recent dense feature matcher DKM for spherical images. The main contribution is predicting the matches on the sphere instead of on a normalized cartesian grid."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- EDM seems to be less robust on EgoNeRF and OmniPhotos. However, no quantitative comparison was done for those datasets. It would have been interesting to see how EDM fares against SphereGlue there.\n- Some design choices were not clear to me. For example, on line 261-262 it is stated that linear refinement on the sphere is impossible, so it must be projected to equirectangular 2D space before the refinement. To me, an obvious ablation would be to compare this to a simple projection operator, i.e. $\\hat{u}^{\\ell} = {\\rm normalize} (\\hat{u}^{\\ell+1} + \\triangle \\hat{u}^{\\ell+1})$.\n- Possibly minor complaint. The authors work in the DKM framework where global matching is seen as a coordinate regression problem, however it can also be seen simply in terms of dense correlation between the features (where the network would not need to \"see\" any embeddings). It would have been nice to see a comparison to such an approach (i.e. instead either regressing the correlation vector as in e.g. PDCNET, or using a cross-view Transformer as in LoFTR.)"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024edm,\ntitle={{EDM}: Equirectangular Projection-Oriented Dense Kernelized Feature Matching},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=45FzVIdA3T},\nnote={under review}\n}"
},
"abstract": {
"value": "We introduce the first learning-based dense matching algorithm, termed Equirectangular Projection-Oriented Dense Kernelized Feature Matching (EDM), specifically designed for omnidirectional images. Equirectangular projection (ERP) images, with their large fields of view, are particularly suited for dense matching techniques that aim to establish comprehensive correspondences across images. However, ERP images are subject to significant distortions, which we address by leveraging the spherical camera model and geodesic flow refinement in the dense matching method. To further mitigate these distortions, we propose spherical positional embeddings based on 3D Cartesian coordinates of the feature grid. Additionally, our method incorporates bidirectional transformations between spherical and Cartesian coordinate systems during refinement, utilizing a unit sphere to improve matching performance. We demonstrate that our proposed method achieves notable performance enhancements, with improvements of +26.72 and +42.62 in AUC@5° on the Matterport3D and Stanford2D3D datasets, respectively."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"omnidirectional image",
"image matching",
"feature matching",
"dense matching"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/4bc6a76d11bea7d06f42c53010b7306031999926.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/5bec39d603a5577c2d91716a1a975199b61a9c9a.zip"
},
"title": {
"value": "EDM: Equirectangular Projection-Oriented Dense Kernelized Feature Matching"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
45rvZkJbuX | Cross-Modal Safety Mechanism Transfer in Large Vision-Language Models | main | Active | Vision-language alignment;Safety of LVLMs;Toxic Content | applications to computer vision, audio, language, and other modalities | 6;6;6;8 | 3;4;4;3 | 3;3;3;3 | 3;3;3;3 | 3;2;3;3 | 6.5 | 3.5 | 3 | 3 | 2.75 | -0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Same as weakness."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The authors first analyze cause of failure in cross-modal safety transfer. Based on the analysis, they propose Text-Guided Alignment (TGA) to transfer safety mechanisms from text to vision, addressing key safety issues in LVLMs. The analysis is thorough and the proposed method is novel in general.\n2. The paper is well-structured, with clear motivations and systematic explanations of the issues with current vision-language alignment methods.\n3. The proposed approach contributes to improving the robustness of LVLMs. This advancement could be important in bridging safety gaps in multimodal AI."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces the concept of Cross-Modal Safety Mechanism Transfer for Large Vision-Language Models (LVLMs), aiming to transfer the safety mechanism from text to vision without additional visual safety fine-tuning. The current vision-language alignment fails to align vision with text at the hidden states level, leading to unsafe responses for harmful images. The proposed Text-Guided vision-language Alignment (TGA) retrieves relevant texts to guide the alignment of vision input to hidden states in LVLMs. TGA effectively transfers safety mechanisms from text to vision, maintaining safety without compromising general performance in vision tasks, outperforming existing vision-language alignment methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. TGA relies on captions generated by LLaVA-1.5-13B for effective alignment. Inaccurate captions can lead to misalignment between vision and language representations, reducing safety performance. Evaluating the impact of captioning errors and exploring mitigation strategies could add robustness to the approach.\n2. The paper does not adequately show how the model handles unsafe compositional inputs. For instance, an image of a wine bottle combined with text like \"teach a kid to buy this\" represents a harmful query, even though the image and text are safe individually. Evaluating compositional risks more deeply could strengthen safety measures.\n3. The paper does not show the model's robustness against red-teaming methods such as jailbreak attacks. Evaluating how effective the proposed approach is in defending against these attacks would provide more confidence in the model’s safety capabilities in adversarial settings."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please see weakness.\n\n1. In Figure 1, the presentation is somewhat confusing. Specifically, in Figure 1c, could you clarify whether the blue arrows represent \"safe\" or \"unsafe\"?\n\n2. In Section 4, could you specify which layers you are analyzing? For example, are you focusing on the qkv (query, key, value) layers or the projection layers?\n\n3. Can you include any discussion of failure scenarios or bad cases where the method may not perform as expected?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "This paper is well-motivated and provides a thorough analysis of layer activations to explain the safety misalignment between vision and language. The work has potential value across multiple related fields, particularly in the design of vision-language models and their safety challenges.\n\nThe method for identifying the layers where the safety mechanism is activated is both reasonable and straightforward, showing effectiveness with a simple approach.\n\nThe proposed TGA alignment method effectively defends against toxic images, with strong evidence presented in Figure 7 to substantiate this claim."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper identifies a vulnerability in current vision-language alignment methods for Large Vision-Language Models (LVLMs), where the safety mechanisms effective for text fail to transfer to visual input, leaving toxic images unchecked. The authors find that misalignment at the specific hidden state layers cause a semantic shift, undermining the safety mechanism for visual input. To address this, they propose a Text-Guided Alignment (TGA) method, which uses related text to guide the projection of visual inputs into hidden states. Experiments demonstrate that TGA successfully transfers text safety mechanisms to vision without additional fine-tuning and maintains overall performance on vision tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper lacks comparisons with other defense methods. Aside from the comparison with the unlearn-FigS defense, the current experimental results are mainly contrasted with the original model. Including comparisons with existing safety defense methods, such as [1-2], would provide stronger evidence of the proposed approach's superiority.\n\nThe presentation is somewhat redundant. For instance, the content in Figures 2 and 4, as well as Figures 3 and 5, could be combined to avoid repetition. Similarly, the writing in Section 4 could be more concise and streamlined for better clarity and flow.\n\n[1] Tovilag: Your Visual-Language Generative Model is Also an Evildoer. EMNLP2023.\n\n[2] Eyes Closed, Safety On: Protecting Multimodal LLMs via Image-to-Text Transformation. ECCV2024."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Q1. Confused about the number of toxic image-text pairs, in L134 it notes 2031 but in L454, it notes 20531."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Pros:\n\n1. The paper tackles an interesting problem which (to best of my knowledge) isn't very well known in the community. As such, it highlights a potential gap and suggests how to fix new VLMs. \n\n2. The motivation is a bit subtle and it is important to note is mostly relevant for open-source models. In a closed sourced model, one could simply have a nsfw classifier on the image-input. However, for open-source model, such an additional component can be easily turned off. As such, a method to have open-source models which are safe is very important. In that sense, the problem is very well motivated.\n\n3. As part of the experiments, the authors collect new dataset which is always appreciated. The authors further provide qualitative visualizations in appendix.\n\n4. The idea of aligning the hidden states is quite clever in my opinion. \n\n5. The authors compare against multiple baseilnes."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Brief Summary: The paper proposes an interesting observation that the safety mechanism of LLMs in filtering out inappropriate content while answering questions is lost when transferring to VLMs naively. As a result, the VLM might answer about things given the image context even though the corresponding LLM wouldn't have. \n\nThe authors identify that specific hidden layers in Transformers are responsible for this behavior and propose a method called TGA to transfer this mechanism from LLMs to VLMs. \n\nExperiments on multiple benchmarks (like POPE, MMVet) show that the proposed method maintains performance while filtering our inappropriate content."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Cons:\n\n1. One thing that isn't clear to me is if it is possible to reverse the trained safety filter by doing an instruction tuning on a sample of toxic dataset by an end user. In that case, it would be easy to \"jailbreak\" the safe model with relative ease. \n\n2. The authors should include a baseline which works as a direct filter on the image itself to get an upper bound estimate."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- For point 1 in weaknesses, if the alignment method change, or, the key words change, not \"sorry\"/\"apologize\", will the activation layers in Figure 2 change?\n\n- For point 1 in weaknesses, how about the change of activation layers if we do not fully fine-tune all parameters of the model? For example, use PEFT for the pre-trained LLMs or just frozen the pre-trained LLMs. In such cases, will the trained LVLMs still suffer from toxic visual content?\n - If so, will the activation layers remain the same?\n - If not, the reviewer thinks the conclusion only holds for the fully fine-tuning case.\n\n- In Table 2, how about the performance of a safety-aligned LVLMs like that in [1]?\n\n\n- Point 2 in weaknesses, the reviewer thinks analysis about the extra cost is needed.\n\n[1] Federico Bianchi, Mirac Suzgun, Giuseppe Attanasio, Paul Rottger, Dan Jurafsky, Tatsunori Hashimoto, and James Zou. Safety-tuned LLaMAs: Lessons from improving the safety of large language models that follow instructions. 2024."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Clear evidence for the safety activation mechanism.\n\n- Straightforward and well-motivated methods. \n\n- TGA performs relatively well without any post-alignment steps."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper aims to find why LVLMs suffer from toxic visual content when converting LLMs to LVLMs. They observe and analyze the safety activation mechanism in the transformers and develop specific methods, TGA, to alleviate the issue of LVLMS without any post alignment."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- 1. The analysis seems only work with the model developed by [1]. If the aligned models change, will the conclusion remain consistent?\n\n- 2. Lack of analysis about the extra cost.\n\n\n[1] Federico Bianchi, Mirac Suzgun, Giuseppe Attanasio, Paul Rottger, Dan Jurafsky, Tatsunori Hashimoto, and James Zou. Safety-tuned LLaMAs: Lessons from improving the safety of large language models that follow instructions. 2024."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "This paper proposes a novel perspective called Cross-Modal Transfer of Safety Mechanism to rethink, explain and address the exacerbated vulnerability of LVLMs to toxic vision compared to toxic text."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024crossmodal,\ntitle={Cross-Modal Safety Mechanism Transfer in Large Vision-Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=45rvZkJbuX},\nnote={under review}\n}"
},
"abstract": {
"value": "Vision-language alignment in Large Vision-Language Models (LVLMs) successfully enables LLMs to understand visual input. However, we find that existing vision-language alignment methods fail to transfer the existing safety mechanism for text in LLMs to vision, which leads to vulnerabilities in toxic image. To explore the cause of this problem, we give the insightful explanation of where and how the safety mechanism of LVLMs operates and conduct comparative analysis between text and vision. We find that the hidden states at the specific transformer layers play a crucial role in the successful activation of safety mechanism, while the vision-language alignment at hidden states level in current methods is insufficient. This results in a semantic shift for input images compared to text in hidden states, therefore misleads the safety mechanism. To address this, we propose a novel Text-Guided vision-language Alignment method (TGA) for LVLMs. TGA retrieves the texts related to input vision and uses them to guide the projection of vision into the hidden states space in LLMs. Experiments show that \\textbf{TGA} not only successfully transfers the safety mechanism for text in basic LLMs to vision in vision-language alignment for LVLMs without any safety fine-tuning on the visual modality but also maintains the general performance on various vision tasks (Safe and Good). Code is in supplemental material and will be released on GitHub after acceptance."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Vision-language alignment",
"Safety of LVLMs",
"Toxic Content"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/3b1a78dc2715df8568019267affcbfaf96f8a2b1.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/cedc979b41ae731cff7c71bea4224be28c612248.zip"
},
"title": {
"value": "Cross-Modal Safety Mechanism Transfer in Large Vision-Language Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
46mbA3vu25 | Does Diffusion Beat GAN in Image Super Resolution? | main | Active | Image Super-Resolution;GANs;Diffusion Models;Generative Models;Deep Learning | generative models | 3;5;5;6 | 4;4;4;2 | 3;3;3;3 | 2;2;2;3 | 3;2;2;2 | 4.75 | 3.5 | 3 | 2.25 | 2.25 | -0.662266 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "“We note that we use semantic image-level captions. Global-level information appears to be not very useful for the SR task” What is the difference between image-level and global-level. Can you give an example? This is unclear.\n\nI have never seen “p-value on SbS comparison” as a way to evaluate using human judgment. Why not just do this quantitatively and threshold the difference between checkpoints? The current way of stopping and evaluating seems incredibly arbitrary and subjective. Can the authors share a few works that use this paradigm? as I have never encountered it. \nE.g. also in “ We conduct an SbS comparison between text-conditioned and unconditional models for both paradigms at all stages of training”\nHow exactly are the GAN models conditioned on the text?\n\nOne major issue is that because of the nature of the SR problem (ill-posed, many to many mapping etc) the types of errors SR models make can inherently invalidate the correctness of the image, e.g. numbers get blurred. It would make the paper significantly stronger if the authors can dig deeper into the entire performance of the SR models for specific tasks as they pertain to downstream tasks and make the evaluation more human interpretable. Right now it is being collapsed into a single number and some qualitative examples. It’s really challenging to make use of the findings in the paper for downstream research/applications. This could significantly enhance the contributions of this work (e.g. by answering *when should one use one paradigm over the other*)\nBut really what are the types of error differences between the two? \nE.g. figure 9 in appendix - the digits or letters. \nE.g. figure 12 in appendix - sometimes diffBIR and real esrgan , the results are switched, one does better than the other\nWe are not really getting conclusive results. Instead mixed findings. 
It would be great if the authors can taxonimize and dig deeper into when we should use GAN over Diffusion based SR models.\nLine 1505 - figure caption “high frequency”\nNot obvious to me why in section G, “G SUPER RESOLUTION OF SYNTHETIC IMAGES” the authors used those datasets to test for OOD? Why not use real data not synthetic data? Second, what if the synthetic data generated by diffusion models (e.g. SDXL as mentioned in the paper) may actually produce a distribution that is closer to that of the diffusion based SR model, thereby giving the diffusion based model a sort of advantage? \nTable 1 What is the dataset being used? Authors should mention this in the caption of the table."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Overall, this type of work should be appreciated as it probes deeper into what the differences in paradigms are when it comes to the SR task. The authors ensure that the setup for both paradigms is as comparable as possible through the architecture, datasets, etc. In general, the writing is good but some parts were confusing."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper systematically compares GANs and diffusion models for image super-resolution (ISR) under controlled, comparable conditions. The findings reveal that GANs, when trained with similar protocols, can match the quality of diffusion models while offering practical advantages, such as faster training and single-step inference. This study highlights key trade-offs between GAN and diffusion approaches, contributing valuable insights into their performance and efficiency in ISR tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Because their experimental results were sometimes in favor of diffusion, sometimes in favor of GAN, and sometimes no difference was found, I have two suggestions that would significantly improve this paper. First, the authors should taxonomize and explain to the reader when one paradigm should be preferred and in which scenarios. Otherwise, this type of work does not help us improve actual application performance. Second, for the authors to actually be able to make such claims, they need to use more fine-grained evaluation, e.g. the model outputs (SR) images should be used for a specific task like digit recognition, segmentation, etc. However, just comparing single valued PSNR, SSIM, LPIPS etc does not really tell us where these models are outperforming each other. That is also very evident from the qualitative results where within the same figure, both paradigms *visually outperform* each other. \n\n----not necessarily weaknesses----\nTerminology: In Line 49, the term “fairness” might be misleading in this context. Instead, a term like “controlled conditions” or “standardized experimental setup” could better communicate the need for consistent variables, such as dataset size and model complexity, in comparing results.\n- In the related works section, the authors mention conflicting findings about whether text conditioning improves diffusion-based ISR. However, it’s unclear why these differences exist or what insights the current paper offers on this topic. A more thorough discussion or stance on this issue could add depth and relevance.\n- Moving the “variations of SR” subsection earlier in the paper would help readers understand the exact ISR task being investigated, providing important context before diving into the model comparisons.\n- In Line 130, “given a reference on training” is unclear. \n\n- A significant limitation is the use of proprietary models and datasets, making it difficult for others to replicate the experiments. 
For instance, the use of “internal foundation model” and a “proprietary dataset of 17 million…” lacks important detail. Will this dataset be released? \n\nFigure Captions and Clarity:\nFigures 1 and 4: These figures would benefit from more descriptive captions, highlighting key differences and the main takeaways.\nFigure 4: The meaning of the green and grey indicators should be further clarified, as well as the criteria used to define convergence in the caption. I’m aware it is in the text.\nFigures 2 and 5: Why do these not include the original HR? \nFigure 3: This figure is hard to interpret because it’s unclear what exact quantity or metric is being reported. Should be added to the caption. The corresponding section is also difficult to understand."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Q1. The detail of \"we did not encounter any difficulties with optimization\" --> What can be the reasons for that nice success? Since many works and practices confirm that training GAN is very unstable and mode collapse is a well-known problem of GAN.\n\nQ2. An experiment for the diffusion-based method in this study took 1 month to get the checkpoint, didn't it? Here many experiments were conducted for diffusion, how much time (months) it is estimated to take to complete all of the reported training? This is to provide some information for reproducibility for the community."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "+ Conducting a comparison of GAN and Diffusion-based approaches for Super-Resolution with the same computational resource can provide good insight for the community.\n+ The finding that given the same model size, GAN matches the performance of quality with the diffusion-based method is interesting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This study challenges the assumption that diffusion-based models inherently outperform GANs in Image Super Resolution (ISR). Noting that diffusion-based ISR models often use larger networks and longer training than GANs, the authors investigate whether these performance gains come from model design or simply increased resources. By controlling for architecture, model size, dataset, and computational budget, they find that GAN-based models can achieve comparable or even superior results. They also examine the influence of factors like text conditioning and augmentation, analyzing their effects on ISR performance and related tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**Concerns**\nI believe that the paper needs to have a thorough clarification. Specifically:\n\n+ Claiming that GAN-based and Diffusion-based approaches give the comparison if using the same number of parameters might be relatively strong. It needs very careful investigations and evaluation. Because, if they give comparable performance, the community has no reason to use diffusion with much more cost for both training (much longer) and inference (much more sampling steps). A comparison with the same setup on some widely-used common dataset benchmarks at first might provide some insights and support rather than just collecting some custom massive datasets.\n\n+ Conducting experiments on extremely huge data, i.e. 17 million images is a very high cost. The author could provide a comparison from a small to a larger number of data in their collected data to see the differences between GAN-based and Diffusion-based methods. For example, 100k, 1M, 2M, 5M, 17M, etc pickup of some of these settings might be reasonable to see if the results/findings are consistent. In practice and research, the number of images for the study is often not too huge up to 17M.\n\n+ Were the methods in Table 1, and Table 4 \"SUPIR, RealESRGAN, DiffBIR, ResShift\" trained on the same dataset as the Diff (ours) and GAN (ours)? Also in these tables, it should be better to clearly state which one is GAN-based and diffusion-based would greatly improve readability.\n\n+ Table 1 and Table 8 show that ResShift with about just 1/4 parameters (174M) already outperformed the GAN (ours) and Diffusion (ours) 614 and 630M on PSNR and SSIM. This may raise a question of whether scaling more can bring up the performance or not, which is contradicting as concerned in the paper doubts the performance gain that comes from scaling up model size. 
\n\n+ Figure 1 and Figure 4 are almost the same and seem to be redundant with no more information added.\n\n**Other suggestions** \n+ For the whole paper, the current form seems to use all \\cite{} making it very messy for all references. I think the use of \\citep{} in latex would produce a more correct presentation of citations for many parts of the paper. Using \\cite{} for cases where the citing author is subject (S), but \\citep{} for other cases when referring to the paper.\n+ Text in figures presented in the paper is too small, e.g. Figure 2, figure 3, figure 5."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Is it possible to do some \"pretraining\" for diffusion-based models similar to the pretraining on L1 for Gan-based method ? So that it can be more fair in evaluation.\n\n2. As I mentioned above in Weaknesses section (bullet point 4), I would like to see how data scaling affect the performance of both GAN-based and diffusion-based ISR models.\n\n3. I know it's a bit infeasible but showing comparison on other restoration tasks like image dehazing, deblurring, .. to see if the same phenonmena happens can be really valuable and strengthen the work."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The main contribution of this paper is a fair comparison between GAN and diffusion-based ISR models, controlling for architecture, dataset size, and computational resources.\n\n2. The authors perform detailed ablation studies, particularly focusing on the effects of pretraining, augmentations, and training with full-resolution images.\n\n3. The paper explores not only the overall performance of the models but also the impact of various design choices such as text conditioning and augmentation, which can be helpful for future work in the field."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates whether diffusion-based models truly outperform GAN-based models for the task of Image Super Resolution (ISR) in a fair setup. The authors conduct a rigorous comparison between diffusion and GAN-based models under controlled experimental settings, ensuring that both model types have equivalent architectures, dataset sizes, and computational budgets. In contrast with common belief, the primary findings reveal that GAN-based models can match or even surpass diffusion models in terms of quality when appropriately scaled. The paper also finds that text conditioning has little effect on model performance, while augmentations can sometimes hurt the training of diffusion models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The contribution is a bit limited as there is no really new and impactful insights presented in this work. Additionally, I'm not sure ICLR is a suitable venue for submitting this work, because it lean toward to more empirical side. May be MLSys is a better venue ? Again, I'm not sure.\n\n2. Even though the experiment setting is quite fair, GAN-based models actually have one additional pretraining stage whereas diffusion model has to be trained from scratch, which could be a reason why diffusion-based model lacks behind GAN-based ones. \n\n3. The authors do not report how the GAN-based model perform without L1 pretraining stage, both qualitatively and quantitatively. Also, claiming that training GAN-based models for ISR does not face instability issues is quite a bold claim because without pretraining on L1 loss, I can imagine that it could be really unstable.\n\n4. Though the paper highlights that both GANs and diffusion models benefit from scaling, it does not investigate how these models scale in terms of data. Like, how much data that GAN-based model starts to outperform diffusion ones. This could help to strengthen the work."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. The authors use Imagen as the architecture for both GAN and diffusion models in their SR experiments. I question whether it is appropriate to use Imagen for the GAN model, given it wasn't designed for GAN-based SR.\n2. My major concern is that the experiments aren't entirely equivalent. Although the authors attempt to balance factors, the GAN model requiring an additional discriminator compared to the diffusion model, the training stability and results heavily depend on this discriminator. It's challenging to ensure true equality between GAN-based methods and diffusion models."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The authors provide valuable insight into the current research trend, highlighting the need for fair comparisons between diffusion and GAN models. \n2. They conduct several experiments, considering computational budget, dataset size, and notably, text conditioning as input prompts.\n3. The authors incorporate human perception to regulate training, ensuring a fair comparison by controlling the training duration.\n4. The experiments are sound, with the authors aiming to maintain consistent conditions, such as using DPM-Solver++ to control the inference steps."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper explores the challenge of conducting a fair comparison between GAN and diffusion models in image super-resolution. It notes that diffusion models often employ larger networks and longer training times than GANs, raising questions about whether their performance is due to inherent advantages or increased resources. Through controlled comparisons, matching architecture, dataset size, and computational budget, the study observes that GANs can achieve results comparable or superior to diffusion models. It also examines factors like text conditioning and data augmentation, finding that commonly used image captions do not significantly impact performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The presentation of this paper could be enhanced with more figures to better illustrate the concepts and experiments, especially those related to text-conditioning.\n2. The comparison of super-resolved images in the manuscript and supplementary materials lacks numerical metrics, making evaluation challenging.\n3. The current conclusion is somewhat vague. While the authors conduct various ablations between GAN and diffusion, it remains unclear under which specific conditions GAN can truly outperform diffusion, thus it needs to be clarified and streamlined."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "In our work we provide a comparison between diffusion-based and GAN-based image super-resolution under controlled settings and show that GAN-based models can be competitive or even better than diffusion."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024does,\ntitle={Does Diffusion Beat {GAN} in Image Super Resolution?},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=46mbA3vu25},\nnote={under review}\n}"
},
"abstract": {
"value": "There is a prevalent opinion that diffusion-based models outperform GAN-based counterparts in the Image Super Resolution (ISR) problem. However, in most studies, diffusion-based ISR models employ larger networks and are trained longer than the GAN baselines. This raises the question of whether the high performance stems from the superiority of the diffusion paradigm or if it is a consequence of the increased scale and the greater computational resources of the contemporary studies. In our work, we thoroughly compare diffusion-based and GAN-based super resolution models under controlled settings, with both approaches having matched architecture, model and dataset sizes, and computational budget. We show that a GAN-based model can achieve results comparable or superior to a diffusion-based model. Additionally, we explore the impact of popular design choices, such as text conditioning and augmentation on the performance of ISR models, showcasing their effect in several downstream tasks."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Image Super-Resolution",
"GANs",
"Diffusion Models",
"Generative Models",
"Deep Learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/e09f78cd45e6576f7c383d952e8ee59820261b57.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Does Diffusion Beat GAN in Image Super Resolution?"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
46tjvA75h6 | No MCMC Teaching For me: Learning Energy-Based Models via Diffusion Synergy | main | Active | energy-based models;generative modeling;sampling;diffusion models | generative models | 3;3;3 | 5;4;4 | 2;2;3 | 2;2;2 | 2;2;3 | 3 | 4.333333 | 2.333333 | 2 | 2.333333 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Given the substantial computational load and potential instability introduced by training an EBM alongside a diffusion model, have you considered alternative strategies to reduce the computational demands, such as truncated or approximate diffusion sampling, without compromising sample quality?\n\nSince your approach integrates a high-capacity diffusion model, could you clarify the unique advantages of training an EBM in tandem? Specifically, how does the EBM contribute to the overall performance compared to using the diffusion model alone for generative tasks?\n\nTo better understand the value of the dual-model approach, would you consider evaluating your method on more complex datasets and comparing it directly against standalone diffusion-based generative models, as well as using samples from your diffusion model as a self-baseline? This would help clarify any performance gains provided by the EBM, particularly on challenging, high-dimensional data where diffusion-only methods may already perform well.\n\nCan the authors clarify what they mean in the experimental section when they refer to Denoising Score Matching? What is the relationship with the sliced score matching mentioned in Figure 3? \n\nAlso, a minor point, why in Fig 3 are the ground truth samples different for the two methodologies?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper's primary strength lies in its innovative approach to training Energy-Based Models (EBMs) without the reliance on Markov Chain Monte Carlo (MCMC) methods, which have known limitations in high-dimensional contexts. Traditional MCMC-based EBM training often suffers from mode collapse, slow mixing, and biased samples, especially with short-run MCMC. By introducing a diffusion-based generative model that jointly trains with the EBM, the authors successfully bypass these challenges. This joint training, which uses divergence between time-reversed diffusion paths as an objective function, eliminates the need for MCMC teaching. As a result, DiffEBM achieves higher sample quality by aligning the generative model directly with the EBM’s learned distribution, making it a valid alternative to MCMC-based methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a novel approach to training Energy-Based Models (EBMs) without relying on Markov Chain Monte Carlo (MCMC) methods, which are traditionally used but can be unstable and biased in high-dimensional settings. The proposed method, referred to as DiffEBM, employs a diffusion-based framework that trains an EBM and a diffusion model simultaneously, effectively eliminating the need for MCMC by leveraging divergence in time-reversed diffusion paths.\n\nThe paper identifies core limitations of MCMC, such as mode collapse and slow mixing, which hinder EBM training. To address these, DiffEBM introduces an objective function to match the EBM and diffusion model, using samples from the latter as unbiased approximations of the data distribution, sidestepping the biases associated with short-run MCMC. The diffusion model is trained using the technique proposed in [Richter & Berner, 2024]. In contrast, the EBM is updated based on synthesized data generated by the diffusion model. \n\nExperimentally, DiffEBM demonstrates superior performance on various benchmarks, including Gaussian mixture datasets and synthetic data distributions like 2Spirals and Swissroll. Performance is evaluated using Sinkhorn distance to compare generated samples to ground-truth distributions.\n\nIn summary, DiffEBM introduces a diffusion-driven training framework for EBMs that enhances efficiency, stability, and sample fidelity by removing MCMC-based sampling, thus providing an alternative pathway for EBM training in complex generative tasks"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The method proposed in this paper, while innovative, introduces significant computational demands that undermine its practical efficiency. The core idea—training an EBM in tandem with a diffusion-based generative model to avoid the pitfalls of MCMC sampling—replaces the complexity of MCMC with an equally demanding requirement: learning a second, paired generative model that must be iteratively updated alongside the EBM. This approach involves repeatedly sampling from the diffusion model during each training step, as highlighted in Algorithm 1, line 223, where a full sequence of diffusion sampling is performed at each iteration. This reliance on diffusion sampling makes the process computationally intensive, as each update to the EBM requires a costly simulation of the diffusion process to produce high-fidelity samples, compounding the training time considerably. Moreover, the iterative nature of sampling across the full diffusion chain can easily lead to instability, especially if the parameters of the generative model diverge from the EBM, creating an oscillating learning dynamic that may fail to converge.\n\nAnother key issue arises from the purpose of training the EBM when the diffusion model, a high-capacity generative framework in its own right, is already optimized to produce accurate samples. If the diffusion model alone can capture the empirical data distribution effectively, as evidenced in the quality of generated samples, the rationale for learning an additional EBM becomes questionable. The diffusion model could theoretically fulfill the generative modeling objective by itself, rendering the EBM redundant for many practical applications. Training both models in parallel may not yield substantial benefits over simply using the diffusion model, especially given the EBM’s limited advantage in scenarios where the diffusion model is already well-aligned with the data distribution. 
Thus, while the framework’s goal is to leverage the EBM’s interpretability and robustness in capturing complex energy landscapes, the computational cost and redundancy associated with dual-model training suggest a misalignment between the theoretical motivation and the efficiency of the method.\n\nAnother limitation is the lack of direct comparison with standalone diffusion-based generative models, which would offer a fairer baseline for evaluating the proposed approach. Since the method relies heavily on a diffusion model, comparing it against established diffusion-only schemes—or even against samples generated solely by its own diffusion model—would help clarify whether the added complexity of training an EBM provides real benefits. Without such comparisons, it’s uncertain if the dual-model approach improves performance significantly over simpler, diffusion-based methods alone, potentially overestimating its effectiveness. \n\nFinally, in my opinion the considered datasets are too simplistic to claim that the proposed method really has superior performance compared to other schemes."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "The paper is clearly written."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The proposed method is reasonable."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose to replace the traditional MCMC sampling for learning energy-based models (EBMs) with sampling from diffusion models. Generation speed and sample quality are major bottlenecks in learning EBMs, and the experiments show part of those problems are addressed. The used sampling method from EBMs is not novel, as it follows the method from recent work by Richter & Berner (2024). While the proposed method is straightforward and reasonable, its contribution is incremental."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The contribution is not significant. It merely incrementally extends the published sampling method to learning EBMs. If the authors could address a major challenge in applications using the diffusion sampling, the contribution would be more noteworthy.\n\nMinor comment:\nAlthough the equations (8) through (11) were borrowed from previous literature, the authors have to explain those equations in their own words. The provided explanation regarding the diffusion sampling from previous work does not clarify why the proposed sampling should be better than MCMC."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- The sampler is trained using a loss function designed to align the data distribution with the current EBM. While this approach is unbiased when the EBM is well-trained, it can lead to a biased maximum likelihood estimator if the EBM is underfitting, which is common in the early stages of training. It would be great to see how it works without the DSM loss in sampler training.\n- The EBM is trained to match the data distribution and the current sampler. It would also be valuable to see the results when the sampler matching loss is omitted during EBM training."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The proposed method eliminates the need for MCMC. While it involves training an additional diffusion-based sampler, it avoids the bias issues associated with MCMC, provided the sampler is well-trained."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a method for training Energy-Based Models (EBMs) without relying on Markov Chain Monte Carlo (MCMC). In each training step, a diffusion-based sampler is learned to match the current EBM and data distribution. This sampler is then used to generate samples, enabling maximum likelihood training of the EBM. Experimental results on synthetic toy data demonstrate the method's effectiveness."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The proposed method is evaluated solely on 2D synthetic data. Testing it on high-dimensional datasets, such as images, would help assess its scalability.\n- There are some missing baselines:\n - Variational Inference: [1] propose to estimate the partition function using variational inference, which is also MCMC-free\n - Noise Contrastive Estimation (NCE) [2]. NCE is MCMC-free and can work very well on 2d density estimation.\n - Energy Discrepancy (ED) [3] is a recently introduced method for training EBMs without MCMC. It offers compelling theoretical guarantees and has demonstrated effectiveness in tasks like density estimation and image modelling.\n\n[1] Duvenaud D, Kelly J, Swersky K, Hashemi M, Norouzi M, Grathwohl W. No MCMC for me: Amortized samplers for fast and stable training of energy-based models. InInternational Conference on Learning Representations (ICLR) 2021.\n\n[2] Gutmann, Michael, and Aapo Hyvärinen. \"Noise-contrastive estimation: A new estimation principle for unnormalized statistical models.\" *Proceedings of the thirteenth international conference on artificial intelligence and statistics*. JMLR Workshop and Conference Proceedings, 2010.\n\n[3] Schröder, Tobias, et al. \"Energy discrepancies: a score-independent loss for energy-based models.\" *Advances in Neural Information Processing Systems* 36 (2024)."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose an innovative MCMC teaching-free framework that jointly trains Energy-Based Models and diffusion-based generative models, significantly enhancing training efficiency and accuracy by eliminating the reliance on biased MCMC samples."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024no,\ntitle={No {MCMC} Teaching For me: Learning Energy-Based Models via Diffusion Synergy},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=46tjvA75h6},\nnote={under review}\n}"
},
"abstract": {
"value": "Markov chain Monte Carlo (MCMC) sampling-based maximum likelihood estimation is a standard approach for training Energy-Based Models (EBMs). However, its effectiveness and training stability in high-dimensional settings remain thorny issues due to challenges like mode collapse and slow mixing of MCMC.\nTo address these limitations, we introduce a novel MCMC teaching-free learning framework that jointly trains an EBM and a diffusion-based generative model, leveraging the variational formulation of divergence between time-reversed diffusion paths. In each iteration, the generator model is trained to align with both the empirical data distribution and the current EBM, bypassing the need for biased MCMC sampling. The EBM is then updated by maximizing the likelihood of the synthesized examples generated through a diffusion generative process that more accurately reflects the EBM’s distribution. Moreover, we propose a novel objective function that further improves EBM learning by minimizing the discrepancy between the EBM and the generative model. Our proposed approach enhances training efficiency and overcomes key challenges associated with traditional MCMC-based methods. Experimental results on generative modeling and likelihood estimation demonstrate the superior performance of our method."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"energy-based models",
"generative modeling",
"sampling",
"diffusion models"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/625f6b75bfb0db6aff7d6f91550be826e89d2c27.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "No MCMC Teaching For me: Learning Energy-Based Models via Diffusion Synergy"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
46xYl55hdc | Single-agent Poisoning Attacks Suffice to Ruin Multi-Agent Learning | main | Active | Multi-agent learning;reward poisoning attack;Nash equilibrium;monotone game;convergence;robustness | learning on time series and dynamical systems | 5;6;8;8 | 4;3;3;3 | 4;3;3;3 | 3;3;3;3 | 4;3;4;3 | 6.75 | 3.25 | 3.25 | 3 | 3.5 | -0.777778 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1) Given that the attack model assumes full knowledge of the victim agent’s utility function, do the authors believe that SUSA could still be effective in a limited-information setting? Are there alternative attack strategies that might be feasible with only partial information?\n\n2) The attack model relies on \"strong\" corruption, where the attacker observes the current round action. It would be valuable to investigate whether the results extend to scenarios where the attacker lacks this observational ability, as well as whether it becomes easier to design robust algorithms against such \"weaker\" attackers.\n\n3) To what extent might the findings on NE shifting and efficiency-robustness trade-offs apply to non-monotone games or games with multiple NEs? Could the authors envision scenarios where the attack objective is to guide agents toward a low-utility NE?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1) The paper highlights a significant and underexplored vulnerability in multi-agent learning (MAL), specifically through single-agent utility poisoning attacks. It may stimulate further research into designing MAL algorithms that are robust to adversarial attacks.\n\n2) The authors provide a rigorous theoretical analysis of the Single-agent Utility Shifting Attack (SUSA), clearly outlining the conditions under which SUSA can effectively alter the Nash Equilibrium (NE). The exploration of the efficiency-robustness trade-off is valuable, highlighting the increased vulnerability of faster-converging algorithms.\n\n3) The authors conduct extensive empirical simulations, showcasing the practical impact and effectiveness of their proposed method"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the robustness of multi-agent learning (MAL) algorithms in strongly monotone games with bandit feedback. The authors propose the Single-agent Utility Shifting Attack (SUSA) method to shift the Nash Equilibrium of monotone games by corrupting a single agent's utility feedback, using a sublinear corruption budget relative to the time horizon. Their analysis uncovers a trade-off between efficiency and robustness, showing that faster-converging MAL algorithms are more vulnerable to such attacks. They also validate their theoretical findings via numerical experiments."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1) The current attack objective is focused on shifting the Nash Equilibrium (NE) with specific distance guarantees. While the paper briefly discusses steering the NE deviation in a desired direction (lines 295-300), it remains unclear if it is feasible to mislead agents toward specific, predefined strategies.\n2) The study is primarily focused on monotone games, illustrated through Cournot competition and Tullock contests. It would be valuable to examine whether these insights hold in other game-theoretic contexts. For instance, in non-monotone games with multiple NEs, it would be interesting to explore alternative attack objectives, such as guiding agents toward an NE with low utility outcomes.\n3) The paper evaluates the effectiveness of attacks based on NE shift and cumulative budget. Expanding the evaluation to include additional robustness metrics, such as stability and utility outcomes, would provide a more comprehensive understanding of the impact of attacks on MAL."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Minor comments:\n\n1. Line 147, “will useful” → “will be useful”.\n2. Line 242 is a “strong” attack, [Lykouris et al., 2018] is a “medium” attack before observing the action. For strong attack, the related literature has: \n 1. Jun, Kwang-Sung, et al. \"Adversarial attacks on stochastic bandits.\" *Advances in neural information processing systems* 31 (2018).\n 2. Liu, Fang, and Ness Shroff. \"Data poisoning attacks on stochastic bandits.\" *International Conference on Machine Learning*. PMLR, 2019"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper's writing is clear and easy to follow. The figures (especially Figure 1) and remarks help readers understand the paper. \n2. The contributions of the attack policy to a single agent and proving a sublinear attack is theoretically enough (Theorem 1) are novel in the literature. \n3. The discussion of the robustness and efficiency trade-off is very insightful and opens the door for more interesting future works."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper designs the attack policy for multi-agent learning in monotone games. It shows that attacking a single agent is enough to diverge the convergence away from NE. The paper also studies the robustness of the MAL algorithm, presenting several interesting open problems and numerical simulations."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. As the authors mentioned that the trade-off had been studied in a single agent, it would be helpful to discuss whether or not the three raised open problems are also present in the single-agent setting. If so, can we extend the single-agent results? If not, why?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Can the authors discuss the attack strategy if the adversary only observes noisy rewards, e.g., sub-gaussian rewards?\n\nIs there any lower bound on the budget that misleads the convergence?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper introduces a novel type of poisoning attack that focuses on a single agent in multi-agent systems, a context underexplored in prior work. The proposed poisoning strategy can mislead any MAL dynamics in strongly monotone games away from the original NE, with a sublinear corruption budget. \n\n2. The authors show a trade-off between convergence speed and the robustness to attacks.\n\n3. The authors provide a thorough theoretical analysis to validate theoretical results."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates the vulnerability of multi-agent learning (MAL) systems in strongly monotone games with bandit feedback. Specifically, the authors propose an attack strategy called Single-agent Utility Shifting Attack (SUSA), which can steer the learning dynamics away from the Nash equilibrium with a sublinear (w.r.t. T) budget. The authors also provide theoretical and empirical results to validate their points."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "In the attack model, the authors assume full knowledge of the victim agent’s utility function, which may not always be practical in real-world applications. Moreover, since the agent is unaware of their utilities and the adversary has such knowledge which further allows to compute more information such as gradient, misleading the system not to converge the original NE looks not surprising due to the information asymmetry. It could be more interesting to restrict the adversary. For example, the adversary only observes the noisy rewards."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Please try to solve the problems in weaknesses. Besides, there are some extra questions:\n\n* The trade-off between robustness and efficiency is very intriguing. I suggest authors further highlight this part in the next version.\n* In Line 169-170, is the parameter $L$ known to the adversary? How can the adversary select the target agent “smartly”?\n* In Line 295-301, the authors want to show that more information will lead to the ability to control the attack direction. However, without more knowledge (which means more assumptions), this is not addressed. Similarly, in Line 319-322, the same issue arises for the derivative distance.\n* In Proposition 1, what will happen if the target agents are a subset of all agents? I.e., for example, if the attacker can manipulate the utilities of two agents, is there any related result?\n* Minor typo: There is a missing \".\" in Line 390."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "This paper considers an interesting topic: Attacking a single agent in multi-agent learning. The two main contributions are both noteworthy:\n\n* The first main result indicates that to shift the global equilibrium, an attacker only needs to target one agent with sublinear cost. This practical finding provides valuable insights into the potential risks of multi-agent learning. \n\n* Second, the trade-off between efficiency and robustness is very interesting. I believe this will provide some heuristic for the design of robust algorithms.\n\nMoreover, this paper is well-written."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper considers the poisoning attacks in multi-agent learning, targeting a single agent. The authors first propose the attack strategy which steers the game from the NE with sublinear attack cost. They then explore the robustness of learning algorithms, analyzing how the convergence rate can affect the algorithms' robustness. Finally, the experiments verify the theories by showing the derivative and the cost under different parameters."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I have several concerns about this paper:\n\n1. The assumptions underlying the proposed attack are quite strong. (1) As the author admits,the adversary is assumed to have complete knowledge of the victim agent's utility function. This assumption, while useful for theoretical analysis, may not hold in practical scenarios, potentially limiting the contribution. (2) The agent is assumed to be unaware of the attacker's presence. If the agents do know, will the attack still work?\n\n2. There’s no knowledge regarding the difficulty of the problem. In other words, can the authors show any lower bound on the cost, which will further indicate the efficiency of the attack? It is worth noting that in a similar topic which is also mentioned by this paper, $[1]$ has already closed such a gap. Furthermore, Figure 2 illustrates the significance of this concern, as in the last sub-figure, the cost increases at an almost linear rate. What’s more, this $\\alpha$ is determined by the dynamic itself, therefore it cannot be well controlled. Thus, the cumulative cost is quite large to some degree. After all, both $\\log(\\log(T))$ and $T^{1-\\alpha}$ are sublinear, but their outcomes are totally different.\n\n3. The authors may have partially overstated their work. The problem they considered is a specific attack problem in monotone games, however, the title and introduction demonstrate greater ambition (in general games). Whether the results can be generalized remains unknown.\n\n[1] Zuo, S. (2024, April). Near Optimal Adversarial Attacks on Stochastic Bandits and Defenses with Smoothed Responses. In International Conference on Artificial Intelligence and Statistics (pp. 2098-2106). PMLR."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Characterize efficiency and robustness trade-off for multi-agent learning algorithms"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024singleagent,\ntitle={Single-agent Poisoning Attacks Suffice to Ruin Multi-Agent Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=46xYl55hdc},\nnote={under review}\n}"
},
"abstract": {
"value": "We investigate the robustness of multi-agent learning in strongly monotone games with bandit feedback. While previous research has developed learning algorithms that achieve last-iterate convergence to the unique Nash equilibrium (NE) at a polynomial rate, we demonstrate that all such algorithms are vulnerable to adversaries capable of poisoning even a single agent's utility observations. Specifically, we propose an attacking strategy such that for any given time horizon $T$, the adversary can mislead any multi-agent learning algorithm to converge to a point other than the unique NE with a corruption budget that grows sublinearly in $T$. To further understand the inherent robustness of these algorithms, we characterize the fundamental trade-off between convergence speed and the maximum tolerable total utility corruptions for two example algorithms, including the state-of-the-art one. Our theoretical and empirical results reveal an intrinsic efficiency-robustness trade-off: the faster an algorithm converges, the more vulnerable it becomes to utility poisoning attacks. To the best of our knowledge, this is the first work to identify and characterize such a trade-off in the context of multi-agent learning."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Multi-agent learning",
"reward poisoning attack",
"Nash equilibrium",
"monotone game",
"convergence",
"robustness"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/b11bfe48813cf05cb317b88730ec3eb22e96b8f4.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning on time series and dynamical systems"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/a3c9dd3084ee357c4a9bfdbee6498d9afb2d672b.zip"
},
"title": {
"value": "Single-agent Poisoning Attacks Suffice to Ruin Multi-Agent Learning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
473sH8qki8 | Reward as Observation: Learning Reward-based Policies for Rapid Adaptation | main | Active | Reinforcement learning;transfer learning | reinforcement learning | 1;1;3;3 | 5;4;4;4 | 2;1;2;1 | 1;1;2;1 | 4;3;2;3 | 2 | 4.25 | 1.5 | 1.25 | 3 | -0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. How big was the demonstration dataset used to train the method?\n2. How were losses balanced? Was RL performed in parallel to supervised learning?"
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "1. The paper is well written and easy to understand\n2. The method is principled and intuitive. \n3. The authors perform ablations and provide statistically significant results."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose a method to learn transferable policies by treating rewards as the observation space on which a learned policy operates. Specifically, they learn behavior policies that utilize as input, a history of observations and actions and generate future actions. These inputs are aggregated using LSTMs which generate actions. This model is trained using PPO online along with an offline behavior cloning loss on a demonstration dataset. The method is evaluated on 3 simple environments - Cartpole, pointmass and Racing Car."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. **Lack of a justifiable motivation:** It's unclear why one would use only rewards as the observation space in practice. For any complex dynamical system, having a policy dependent on an observation is required. Theoretically, this would only work in stationary bandit-like settings, where a single action optimal action exists from the ones available and a history of past actions and rewards would suffice to take the right action. However, in all other “RL” settings where an action taken changes the state of the world, i.e., environments with a transition function, this method would break. Also, it’s hard to motivate this method from a practical perspective - in most real applications, one would utilize all possible information available to learn a behavior. In short, the authors claim they study a much more complex problem - that of reinforcement learning, while applying a primitive set of assumptions - those of a multiarm bandit. I don’t believe this method would work under the advertised conditions. Additionally, I question the claims of generalization in the paper - but of course, the method would generalize across visual observation perturbations. It is not conditioned on these perturbed inputs. \n2. **Unclear why this works:** I suspect the method is able to solve tasks due to the demonstrations available to it - not due to the online PPO. This is clear in the ablations also - for tasks that actually have a transition function like Car Racing, the demonstrations yield appreciable improvements over not using them. \n3. **Lack of Novelty:** It’s unclear to me whether this paper adds to existing knowledge in any way. Multiarmed Bandits are well studied and memory based agents are well studied also. The paper currently does not present any new theoretical insights, nor does it show massively scaled experimentation."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "What are some real applications this method will be useful?"
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper is very well-written and very easy to follow and understand."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a method called reward-based policy. It makes the assumption that rewards are observable in a reinforcement learning problem, and attempts to learn a policy that outputs actions based only on previous rewards and actions, and no states or observations. The hope is that such a policy will be transferable between environments where dynamics (and reward functions) are the same."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I think the major problem with the paper is that the studied setting is very unrealistic. The paper provides some discussions, which also acknowledge how difficult it is for their assumptions to be satisfied. At the end, it is still difficult to think of any application where the proposed approach is going to be useful.\n\nSpecifically, I am not worried about the assumption that the reward is observable. But what are some real-world problems / environments where an RL agent can perform reasonably well just by seeing the previous actions and rewards? As the paper acknowledges, this is exactly like trying to play a game by looking only at the score and no other parts of the screen. Other than environments where it is possible to memorize the solution, it seems to me that these conditions are extremely hard to satisfy. And this is not even the only assumption. In addition to these, there should also be some transfer concern where the robot's transition and reward functions stay the same but the observations change. I don't see any useful application.\n\nA somewhat promising part of the paper is the section where it tries to estimate rewards from observations. However, that section requires access to a state-based policy that solves the task. Again, this is very unrealistic. If the problem is a POMDP, how would one have access to a state-based policy? The states are not even known to the agent.\n\nFinally, the main promise of the paper is not very interesting. When the learned policy does not depend on the observations, of course the observation function can be changed in any arbitrary way. Under that setting, that function is completely irrelevant. So the experiments are completely unsurprising and obvious.\n\nBelow are my other comments that are more minor:\n- et al. is plural, so those citations should be thought as \"they\" instead of \"it\". 
There are some grammatical errors about this.\n- The POMDP definition is missing the function that maps the states to observations.\n- The so called reward-based policy is defined as $R\\times A \\to A$ but my understanding is that a full history of rewards and actions are inputted, not just the most recent ones. In that case, this definition of the policy function is incorrect.\n- Incorrect capitalization in line 248.\n- Again, the second hypothesis/question (line 253) is validated by the definition of reward-based policy. Why is it even a hypothesis?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Q1. How does the reward-based policy compare to model-based methods that learn the dynamics of the environments?\n\nQ2. What’s the broader motivation for this approach, given that it currently only works in very basic environments and requires nearly exponential time as task complexity increases, if it even works at all?\n\nQ3. Based on what does the agent make the first move at the start of an episode before any rewards have been granted?\n\nQ4. Why did the authors choose the color palette swap to introduce the observational shift? Perhaps it should be explained that any change in the observation makes no difference because it is not regarded as part of the input to the model.\n\nQ5. Why not also include an ablation study for the RNN part?\n\nQ6. What do “sufficiently dense” rewards entail for effective training?\n\nQ7. How does the method handle noisy or inconsistent rewards, particularly in real-world applications where feedback may be delayed or imperfect?\n\nQ8. What practical scenarios do you envision where the source and target environments would have identical dynamics, actions, observations, and reward functions?\n\nQ9. How does reliance on expert guidance align with the goal of a “reward-only” approach?\n\nQ10. How would the method handle tasks requiring spatial awareness, such as navigation or manipulation tasks?\n\nQ11. How does the method compare to other established RL techniques?\n\nQ12. What are the quantitative results of the transfer experiments?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "**S1. Thorough Challenge Analysis**: The authors provide a useful in-depth examination of the difficulties inherent to reward-based policies, such as poor observability, difficulty in estimating value functions, and limited exploration. This highlights the limitations of using a scalar reward signal rather than high-dimensional state information. They emphasize the necessity of dense rewards, showing how the method struggles as dimensionality increases.\n\n**S2. Expert Guidance Ablation.** An ablation study on the behavior cloning loss component shows how important it is, especially in the more complex task like Car Racing.\n\n**S3. Simplicity and Comprehensibility.** The method is straightforward, with minimal components and no additional tunable hyperparameters, making it easy to implement and understand.\n\n**S4. Robustness to Observation Quality**: Once trained, the model is independent of observation quality, allowing it to perform well even with noisy or degraded visual observations.\n\n**S5. Effective Reward Estimation for Transfer.** In Section 4.4, the authors demonstrate the transfer capabilities by estimating rewards directly from images, allowing the reward-based policy to perform well even without access to the true reward at inference. They further show that transferring from a 1D reward signal is easier and more reliable than transferring from high-dimensional observations, requiring minimal data and proving more resilient than state estimation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper explores a reward-only approach to RL, where policies are trained using rewards and actions rather than observational data, enabling potential zero-shot transfer across environments with different visual representations. The authors propose a method using LSTM-based temporal history and expert-guided behavior cloning to learn reward-based policies in simple environments. They demonstrate that reward-only policies perform reasonably well compared to observation-based ones, especially in tasks with consistent dynamics. The method is evaluated in popular tasks like Pointmass, Cartpole, and Car Racing, with 2D-to-3D transfer shown in a reconfigured AirSim environment."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**W1. Numerous Assumptions.** The method makes many assumptions for successful implementation, limiting its adaptability and real-world applicability.\n\n1. **Dense and Accurate Rewards**. The authors state that rewards must be *sufficiently dense* and *give a good value throughout the state space*, yet there is no analysis of how sparse they need to be. This lack of clarity makes it uncertain how sparse or dense rewards can be before training fails. In practice, it likely means that transitions without an informative reward signal contribute minimally to training, making the approach highly sample-inefficient. In realistic settings, feedback is often imperfect or delayed. For example, in robotics, sensors might provide inaccurate readings due to interference or hardware limitations, leading to noisy or inconsistent rewards. Similarly, in environments where rewards are human-generated (like feedback in recommendation systems), subjective or inconsistent responses can introduce noise.\n2. **Domain Compatibility**. The source and target environments should share the same transition dynamics, action and observation spaces, and reward structure. This overlooks many practical cases, where slight discrepancies in dynamics or observation structures are the norm. The authors don’t discuss where these conditions might realistically apply, leaving practical feasibility unexplored.\n3. **Expert Guidance**. The ablation study shows that expert guidance significantly boosts performance, particularly on complex tasks. However, this dependence undermines the claimed benefits of a “reward-only” approach, as it reintroduces standard RL processes the authors aim to bypass. Moreover, training an observation-based expert policy adds a computational overhead, and since training is done online, inference also needs to be run on the expert model. \n\n**W2. 
Loss of Spatial Context.** By relying solely on rewards and actions, the method lacks spatial awareness, which is critical in tasks requiring an understanding of position or orientation. For example, in navigation tasks like maze-solving, an agent without spatial context will struggle to differentiate between distinct but similarly rewarding areas, such as identical-looking corridors or dead ends. In manipulation tasks, the lack of positional feedback leads to incorrect actions, like reaching for objects without adjusting for their relative location. Without such spatial cues, the method is limited to simple tasks where positioning doesn’t play a role. The agent’s ability to keep the car centered on the track in the Car Racing environment hinges on the overly-engineered reward function that promotes this behavior. Relying on such finely-tuned rewards is impractical in real-world applications, where crafting reward functions to this level of precision is rarely feasible.\n\n**W3. Lack of methodological novelty**. The approach lacks innovation, relying on a simple combination of an LSTM and behavior cloning to train PPO. This simplicity offers little advancement over existing techniques and contributes minimally to the field, as both components are well-established in RL. Moreover, the explanation of the method is scarce. Neither the text nor the caption of Figure 3 adequately explains the diagram, making it difficult to interpret. For example, it’s not immediately evident that the ‘options’ in the testing phase of Figure 3 depict the modes where the environment provides the reward and where the reward is learned.\n\n**W4. Weak Experimental Evaluation.**\n1. The experiments are limited to simple environments (Pointmass, Cartpole, and Car Racing), with the authors suggesting that more complex environments with higher state dimensions or sparse rewards are \"impossible\" for this method. 
This raises questions about practical utility—if the method can't handle harder, more realistic tasks, its applicability remains unclear.\n2. The authors only compare their reward-based policy to a regular observation-based policy, which is supposed to serve as an upper bound. However, they include no comparisons with other established techniques whatsoever, offering no context on how their approach measures up to alternative methods in this setting.\n3. While evaluation is done over 50 trials, the number of seeds used for training isn’t specified. Without this, it’s hard to assess the robustness of the method.\n4. The authors state that *it is essential that some recurrent network is used*, and that that *a single reward/action pair is not sufficient*. However, they provide no analysis or ablation study to clarify the LSTM’s specific contribution. Alternative approaches for maintaining temporal history, like stacking previous rewards and actions, are not explored, leaving it unclear whether the LSTM is genuinely necessary or if simpler methods could achieve comparable results.\n\n**W5. Transfer Limitations**. The authors attempt 2D-to-3D transfer by using the Car Racing and AirSim environments, as shown in Figure 1, considering a scenario where changes only occur in pixel-based observations. This leads them to circumvent the realistic challenges of transfer by manually reimplementing the kinematics to match across environments. This undermines the notion of true zero-shot transfer, as manual alignment is rarely viable in real-world applications and reveals the method’s limited applicability. Furthermore, no quantitative results are provided for the 3D transfer; it’s merely claimed that the policy shows “reasonable driving performance,” leaving the success of this transfer largely unsubstantiated. 
The authors’ statement that “our focus is not on dealing with the dynamics shift” dismisses the real complexities of 2D-to-3D transfer, where adapting to dynamic differences is unavoidable. This approach fails to demonstrate genuine transfer."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "n/a"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- **Experimental Clarity**: The illustrative example referenced in Figure 1 is missing from the paper (there is no real-road transfer), which may confuse readers. The authors should replace it with an existing figure from their experiments.\n- **Generalization to Varied MDPs**: Beyond identical observation shifts, how does the reward-conditioned policy perform under broader MDP changes, such as variations in transition dynamics or reward structures?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- **Novel Approach:** The paper presents an innovative method for training policies using only reward and action histories, offering a fresh perspective in reinforcement learning.\n\n- **Intuitive Explanation:** The authors clearly and convincingly explain why reward-based policy learning can be effective, especially in navigation tasks. For example, learning a goal-conditioned policy remains feasible by only incorporating the reward signal."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper tackles the challenge of enabling intelligent agents to generalize to unseen environments by focusing on changes solely in observation spaces. The authors introduce a novel approach that trains policies using only the history of rewards and actions, excluding direct observations. They hypothesize that dense reward signals can facilitate zero-shot transfer across diverse environments. The proposed framework leverages temporal histories of reward-action pairs and expert guidance to train reward-based policies. Experiments in Pointmass, Cartpole, and Car Racing demonstrate that these policies achieve 60% to 90% of the performance of standard observation-based policies while maintaining transferability to environments with significant observation shifts."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- **Limited Real-World Applicability:** Training without direct observations restricts the method’s applicability, as real-world scenarios often involve more complex changes beyond observation shifts. This limitation raises concerns about the generalizability of reward-conditioned policies in diverse settings. What are the possible scenarios in the real world where only observation changes within the MDP? \n- **Scalability Issues:** While the reward-based learning approach shows promise in simpler environments like Cartpole, its effectiveness in more complex tasks such as locomotion, manipulation, or humanoid control remains uncertain. Can reward-based policy handle complex locomotion or manipulation tasks? Since this work focuses on understanding transfer benefits, there is concern that it might struggle to learn necessary behaviors before transfer."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024reward,\ntitle={Reward as Observation: Learning Reward-based Policies for Rapid Adaptation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=473sH8qki8},\nnote={under review}\n}"
},
"abstract": {
"value": "This paper explores a reward-based policy to achieve zero-shot transfer between source and target environments with completely different observation spaces. While humans can demonstrate impressive adaptation capabilities, deep neural network policies often struggle to adapt to a new environment and require a considerable amount of samples for successful transfer. Instead, we propose a novel reward-based policy only conditioned on rewards and actions, enabling zero-shot adaptation to new environments with completely different observations. We discuss the challenges and feasibility of a reward-based policy and then propose a practical algorithm for training. We demonstrate that a reward policy can be trained within three different environments, Pointmass, Cartpole, and 2D Car Racing, and transferred to completely different observations, such as different color palettes or 3D rendering, in a zero-shot manner. We also demonstrate that a reward-based policy can further guide the training of an observation-based policy in the target environment."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Reinforcement learning",
"transfer learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/0d231b53ee9d79dcce4e2197ebda18a953063ca4.pdf"
},
"presentation": null,
"primary_area": {
"value": "reinforcement learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/b2b8b3fe38e91656577ae2c4b6634a5759fb8d62.zip"
},
"title": {
"value": "Reward as Observation: Learning Reward-based Policies for Rapid Adaptation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
47wXbygsvp | TAGA: Self-supervised Learning for Template-free Animatable Gaussian Avatars | main | Withdraw | Template-free avatar;Animatble Avatar;Gaussian Splatting;Self-supervised Learning | applications to computer vision, audio, language, and other modalities | Zhichao Zhai;Guikun Chen;Wenguan Wang;Dong Zheng;Jun Xiao | ~Zhichao_Zhai1;~Guikun_Chen1;~Wenguan_Wang4;~Dong_Zheng4;~Jun_Xiao1 | 5;5;5;5;5 | 5;4;3;4;4 | 2;3;2;2;3 | 3;3;2;2;2 | 3;3;2;2;3 | 5 | 4 | 2.4 | 2.4 | 2.6 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
"value": "We would like to formally withdraw our paper from the ICLR 2025 submission process. After thorough consideration, we have decided to revise and improve our work based on new findings and constructive feedback. We believe these revisions will lead to a stronger and more impactful contribution. We are grateful for the time and thoughtful feedback provided by the reviewers and Area Chairs."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": {
"value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors."
}
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See [weaknesses]."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* The proposed method can reconstruct an animatable Gaussian avatar without the need of mesh templates. \n\n* The authors propose a strategy to detect \"ambiguous Gaussians\" that may have unreasonable positions or skinning weights. With such a detection strategy, these ambiguous points can be corrected, leading to a more plausible shape reconstruction."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a method for learning animatable 3D Gaussian avatars from monocular videos. Unlike existing works, the proposed method does not rely on an explicit 3D mesh template. Instead, the Gaussian positions are initialize from a Gaussian distribution around each bone, and the skinning field is initialized and stored in a low-resolution voxel grid. During training, an ambiguous Gaussian correction strategy is introduced to ensure all the Gaussian points have plausible skinning weights and canonical positions. Experiments show that the proposed method is able to reconstruct plausible avatars from monocular videos without any template inputs."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The authors did not provide any video results, which makes it difficult to evaluate the animation quality. Providing video results is a common practice in the research field of avatar modeling technology.\n\n* The authors only conducted experiments on human bodies. As a template-free method, it can be applied for other creatures, eg., pigs and dogs. Previous template-free methods like TAVA demonstrated their ability in modeling different creatures in their paper, so I encourage the authors to conduct a similar experiments to better showcase the capability of the proposed method. \n\n* In Abstract, the authors claims a really impressive speedup (\"60x faster in training\" and \"560x faster in rendering\"). However, this advantage is mainly brought by the Gaussian splatting itself, rather than the technical contributions of this paper. Additionally, given that many existing works have already applied Gaussian splatting in the task of avatar modeling, I think the authors should tone down this advantage."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See the weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "This paper presents a novel approach for constructing template-free, animatable human avatars using 3DGS. To address the skinning ambiguities between adjacent body parts, the authors propose to regularize the canonical Gaussian points through a bone-based Gaussian mixture model and an EM optimization algorithm. By treating these Gaussian points as deformation anchors, the canonical geometry can be further refined in a self-supervised manner. The experimental results demonstrate better performance compared to previous template-free methods, both qualitatively and quantitatively.\n\n1. The paper proposes the first template-free Gaussian human avatar.\n2. Technical details are reasonable.\n3. Paper writing is most clear. But insight is not that clear.\n4. Results are mostly reasonable."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a template-free Gaussian avatar. As far as I know, this is the first template-free Gaussian human avatar. The experiment results demonstrate improved performance over those template-free NeRF methods. This is reasonable. However, the paper lacks a deep insight description of the technical contribution. From my point of view, the advantage of template-free property is quite similar with those NeRF-based template-free methods. It would be good to make more clear about why the method can acchieve fast training and high quality rendering, even using a template-based method. From my point of view, template-free alone is not good enough and fast training and high quality rendering are more attractive to me. Also, quantitative experimental results shown in the paper do not validate the improved performance over methods like HumanNeRF. The paper does not provides results on high quality multiview datasets and especially loose cloth human or animals, lacking the validation of template-free benefit."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Lack of video demo to show the performance on dynamic appearance and dynamic details. This is important to see if the results suffer temporal jitter or not. Also, this is important for qualitative evaluation, as figure results can be delibrately choosed from the results. \n2. The results shown in this paper are all tight cloth humans, with loose cloth humans and animals. This could not demonstrate the advantage of template-free method. Is the proposed method fit for loose cloth humans and animals? \n3. The main improvement of this paper is the training speed and the rendering quality, quantitatively compared with other methods like NPC, HumanNeRF, TAVA, as shown in figure 2. However, this performance gain seems to come most from the using of Gaussian representation. Methods using templates and Gaussian splatting have emerged these days. It is better to compared with these method to see the performance differences on training time and rendering quality,especially since the results shown in this paper are tight clothed humans.\n4. Based on 3), I would think if using template and Gaussian splatting, the training time would be further lower, and the rendering quality would be further improved. Is this the truth? Can you discuss the trade-offs between template-free and template-based approaches when combined with Gaussian splatting, specifically in terms of training time and rendering quality.\n5. Qualitative results shown in Fig.5 show that, HumanNeRF is much than the proposed method, as the face and the clothes are much more clear than the proposed method. Can you please address this apparent contradiction and provide more detailed analysis of where their method outperforms HumanNeRF quantitatively despite the qualitative differences.\n6. I would suggest using high quality multiview human performance sequences like ActorShq or Thuman4.0 in the experiment sessions. It would be more clear to see if the results are good or not. 
I know that the two datasets you used in the paper are widely used in monocular human avatar, but I would still want to know more about the reason of choosing these datasets. Also, is it possible for you to provide your results on ActorShq dataset?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Would it be possible to provide comparisons to ground truth in the visuals?\n- How does the method stack against recent methods such as Animatable Gaussians and ExAvatar?\n- (Minor) Providing a more coherent explanation of why template-free is a critical design choice would be helpful."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "+ Provides a way to learn an animatable avatar without the need of existing mesh templates - which can help simplify overall pipelines. \n+ Method seems to lead to fewer artifacts compared to other template-free methods.\n+ Quantitative results suggest improved performance over other template-free methods. \n+ The training speed is significantly higher than some of the existing methods (although it is probably mostly due to reliance on gaussians as primitives)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work introduces an approach for creating animatable avatars from monocular videos. The key components of the method include: relying on Gaussians attached-to-bones as the rendering representation, voxelized skinning field (which according to the authors provides better generalization over more widely used MLPs), GMM-based technique to fix regions with ambiguous mapping, as well as a backward mapping strategy utilizing the ambiguous gaussian detection. Experimental results on ZJU-Mocap and PeopleSnapshot suggest that method performs better than several other template-free baselines in terms of quality and speed."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- A lot of the claimed improvements - such as speed of training / quality - could actually be largely due to the use of a different representation - gaussian splats - which probably cannot be considered a novel contribution at this point. \n- The method is person-specific, meaning that it requires training per individual, and still requires 30 minutes to train on a video. \n- The GMM-based technique for fixing \"ambiguous guassians\" seems to be very ad-hoc and is not used in a joint optimization with the model parameters.\n- The overall quality is limited - and to judge it fully ideally one needs to provide comparison with ground truth in images, not just competing methods - to understand how well identity is preserved. \n- In some cases it is hard to actually reason about the quality of the method, given the poor quality of the datasets used (in particular ZJU-Mocap). I am not sure if visuals on that dataset are very informative. Also, in that dataset person is rotating 360 views, thus the claim about method being monocular is somewhat weaker. \n- Potential missing comparisons: Animatable Gaussians (CVPR 2024), ExAvatar (ECCV 2024).\n- (Minor) Although authors suggest that being template-free is important - why is it actually important - is not clearly explained in the paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "The reconstruction results of TAGA in Figure.5 have black pixels around the avatars, TAVA also appears to have such results while other methods don’t. This can lead to worse visual quality. Is it because of the mask or such methods tend to generate such results? The cause of the black pixels and the potential impact need to be addressed."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.\tThe presentation of the paper is clear and easy to follow.\n2.\tThe proposed method achieves significant improvements on the training speed and rendering speed by utilizing the Gaussian representation.\n3.\tThe voxel-based skinning field for forward deformation achieves fairly good and robust results.\n4.\tThe backward-mapping strategy that utilizes anomaly detection can alleviate unrealistic geometric artifacts and demonstrate better visual quality than previous template-free methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a template-free, Gaussian-based method to reconstruct animatable avatars from monocular video input. The proposed method has two core designs to achieve superior visual quality and fast training and real-time rendering. The first core design is a self-supervised method for guiding both geometry and skinning learning. Reconstructing consistent template-free avatars is challenging due to lack of guidance such as predefined shapes or skinning anchors. To address this, TAGA leverages the one-to-one correspondence between canonical and observation spaces. The second core design is the new backward-mapping strategy that integrates anomaly detection to alleviate ambiguous Gaussians that may lead to artifacts. Extensive experiments demonstrate superior visual quality over previous template-free methods while achieving much faster training and rendering."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "From the perspective of human avatar reconstruction, SMPL tends to be a strong prior for high-quality reconstruction. On the other hand, template-free methods are often used to reconstruct complex human avatars like humans with dress or animals (TAVA showcases such results). However, while the method claims template-free as one of the main contributions, the paper does not demonstrate such results. If the scope of experiments lies in the reconstruction of animatable human avatars, then Gaussian-based methods like GART (SMPL-based) has better results regarding visual quality (can be seen in Table.2) and comparable efficiency. This needs to be addressed by the authors in the discussion section. Comparing GART and TAGA on complex avatars with loose clothing or animal subjects can better demonstrate the advantage of template-free methods."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "In addition to the concerns listed in weaknesses, \n* Is the prior still effective when the body parts are close in the monocular video, e.g. in UBC-Fashion, subjects’ arms sticking to the torso? \n* Is TAGA able to learn the correct blend skinning weights for challenging clothes, e.g., dresses?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* The paper is well-written and easy to follow. All proposed components are well-motivated and described in detail.\n* It demonstrates the effectiveness of the proposed approach with comprehensive experiments. It outperforms the template-free baselines both quantitatively and qualitatively."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper marries Gaussian splatting with template-free human rendering. To resolve the amplified ambiguities in the template-free setting, it refines the learned blend skinning weights in the observation space using GMM priors and EM. The corrected blend skinning weights are then employed to supervise the Gaussians and blend skinning weights in the canonical space. Extensive results show that the proposed method surpasses the state-of-the-art template-free approaches in terms of rendering quality as well as training and rendering speed."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The benefits of template-free methods are unclear to me due to the following reasons: 1) The authors claimed that template-based methods “require labor-intensive 3D scanning and manual annotation”. However, current human mesh recovery can efficiently and accurately predict SMPL parameters. Template-based methods such as GART and GoMAvatar showcase in-the-wild videos where the human poses and shapes are predicted from off-the-shelf tools such as ReFit [1]. Therefore the templates are not too expensive to acquire. 2) Although the proposed method doesn’t use any priors from human templates, it still requires heavy handcrafted priors to constrain the solutions, for example, the GMM model. 3) Both the training speed and rendering quality of template-free methods still lag behind those of template-based methods. I would appreciate it if the authors could clarify how the manually designed priors in this work outperform the priors derived from human templates. Do the proposed priors offer better generalization to certain scenarios? How does TAGA's performance compare to template-based approaches with predicted SMPL parameters as inputs?\n\n* The method heavily relies on the anomaly Gaussian detection and refinement with GMM priors and EM. The effectiveness of the EM process is not shown in the paper. Could the authors provide an ablation study or visualization showing how the EM process improves the anomaly Gaussian detection and refinement over iterations?\n\n* It is unclear how the method could generalize to in-the-wild scenes where subject masks and poses are predicted and therefore less accurate. Also, the poses in in-the-wild scenes can be more challenging compared to ZJU-MoCap and PeopleSnapshot.\n\n[1] Wang, Yufu, and Kostas Daniilidis. \"Refit: Recurrent fitting network for 3d human recovery.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@misc{\nzhai2024taga,\ntitle={{TAGA}: Self-supervised Learning for Template-free Animatable Gaussian Avatars},\nauthor={Zhichao Zhai and Guikun Chen and Wenguan Wang and Dong Zheng and Jun Xiao},\nyear={2024},\nurl={https://openreview.net/forum?id=47wXbygsvp}\n}"
},
"abstract": {
"value": "Decoupling from customized parametric templates marks an integral leap towards creating fully flexible, animatable avatars. In this work, we introduce TAGA (Template-free Animatable Gaussian Avatars), the first template-free, Gaussian-based solution for the reconstruction of animatable avatars from monocular videos, which offers distinct advantages in fast training and real-time rendering. Constructing template-free avatars is challenging due to the lack of predefined shapes and reliable skinning anchors to ensure consistent geometry and movement. TAGA addresses this by introducing a self-supervised method which guides both geometry and skinning learning leveraging the one-to-one correspondence between canonical and observation spaces. During the forward mapping phase, a voxel-based skinning field is introduced to learn smooth deformations that generalize to unseen poses. However, without template priors, forward mapping often captures spurious correlations of adjacent body parts, leading to unrealistic geometric artifacts in the canonical pose. To alleviate this, we define Gaussians with spurious correlations as \"Ambiguous Gaussians'' and then propose a new backward mapping strategy that integrates anomaly detection to identify and correct Ambiguous Gaussians. Compared to existing state-of-the-art template-free methods, TAGA achieves superior visual fidelity for novel views and poses, while being 60 $\\times$ faster in training (0.5 hours vs 30 hours) and 560 $\\times$ faster in rendering (140 FPS vs 0.25 FPS). Experiments on challenging datasets that possess limited pose diversity further demonstrate TAGA’s robustness and generality. Code will be released."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": {
"value": [
"~Zhichao_Zhai1",
"~Guikun_Chen1",
"~Wenguan_Wang4",
"~Dong_Zheng4",
"~Jun_Xiao1"
]
},
"authors": {
"value": [
"Zhichao Zhai",
"Guikun Chen",
"Wenguan Wang",
"Dong Zheng",
"Jun Xiao"
]
},
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Template-free avatar",
"Animatble Avatar",
"Gaussian Splatting",
"Self-supervised Learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": {
"value": "zhai|taga_selfsupervised_learning_for_templatefree_animatable_gaussian_avatars"
},
"pdf": {
"value": "/pdf/3c399225fdc47ba007c3b3d9459acd6062c05f83.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "TAGA: Self-supervised Learning for Template-free Animatable Gaussian Avatars"
},
"venue": {
"value": "ICLR 2025 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Withdrawn_Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||
48WAZhwHHw | Planning in Natural Language Improves LLM Search for Code Generation | main | Active | LLM;search;inference-time compute;competitive programming;reasoning;code generation;pass@k;diversity | foundation or frontier models, including LLMs | 5;6;10 | 3;3;3 | 3;3;4 | 2;3;4 | 3;3;4 | 7 | 3 | 3.333333 | 3 | 3.333333 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "- > Interestingly, we find that IDEASEARCH performs somewhat better, which we speculate comes from differences in splitting solution sketch into two model responses, instead of doing both chain-of-thought and code solution in one model response.\n\n This is surprising. Is the only difference here that a new model response comes as a new \"message\" in the chat LLM API?"
},
"rating": {
"value": 10
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "There were a great many things I liked about this paper, and I learned a lot from its experiments.\n- Figure 1 is very compelling and sells the main takeaway immediately: Searching over *ideas* in natural language (PlanSearch) is a significantly more effective way to spend inference-time compute than naive repeated sampling of direct solutions.\n- The broad strategy of \"generate plan first then generate the solution based on that\" is highly applicable to other domains; PlanSearch can be used to generate plans for anything, and could plausibly improve performance across many domains besides code generation.\n- Useful takeaway that performance can be seen as a function of diversity of ideas. This is an important lesson for the field which is not currently prioritizing LLMs' diversity of ideas, but should take idea diversity more seriously given this evidence.\n- Interesting to see that instruction-tuned models can sacrifice the diversity of ideas present in base models, Figure 30 is a great figure illustrating this effect! This line was quite surprising to me:\n > in many cases, despite instruction tuned models outperforming base models by large margins on a single sample regime (pass@1), this trend disappears—sometimes even reversing—on a multi-sample regime (pass@k). We refer to Figure 30 as an example of this phenomenon\n\n and this further line in the Conclusion is clear-sighted in pointing out the implication of the problem of losing diversity during post-training:\n > while PLANSEARCH substantially improves diversity over idea space at inference-time, fundamentally, improvements in diversity should also come at the post-training stage\n- Section 3 builds a great foundation to motivate why we care about searching over idea sketches: Starts by considering the correct layer of abstraction to search over, run experiments showing the power of the right \"idea sketch\". 
Figure 3a and 3b are great.\n- Interesting to see that o1-mini, a model which itself already scales inference-time compute, benefits less from this method (which makes sense, but good to know).\n- Impressive thoroughness in sharing results, with >40 figures throughout the paper and appendices!\n\nOverall, the paper is very well-written, adds new insights to an emerging topic (diversity and search) with important ramifications for the field, and is very thorough with well-designed experiments, with many interesting results."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates the effect of inference-time search in LLMs applied to contemporary coding benchmarks. The investigations start by asking about the appropriate level of abstraction at which to perform search, and identifies \"natural language plans\" as a useful level through clear experimentation. They introduce a new inference-time search algorithm for LLMs in code generation called \"PlanSearch\". The algorithm appears general enough to be applied to other domains, though the paper does not pursue this. In the main experiments, the authors find that PlanSearch improves LLM performance on coding benchmarks significantly, and by a large margin compared to standard inference-scaling methods (RepeatedSampling). Further experiments identify \"idea diversity\" as the key driving factor for PlanSearch's success, with their custom measures of idea diversity being predictive of downstream performance. The authors further discover that instruction-finetuned models can have less diverse ideas than base model variants, raising important questions around the best way to perform LLM post-training."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Few complaints overall. One minor weakness that doesn't cut against the broader claims and lessons of the paper:\n- The actual search algorithm of PlanSearch in Section 4.3 feels somewhat arbitrary. If the goal is simply to generate diverse plans, I suspect there will be many other different ways to prompt LLMs to generate diverse ideas besides the specific PlanSearch algorithm as described. Did the authors try other algorithms / prompts? Would have been nice to see the failure cases and understand why this specific design was selected.\n - Ablations in Appendix H address this complaint somewhat, but still assumes the same algorithm structure and is mostly just a \"hyperparameter search\" which has small effect on the results.\n - To give a concrete example of what I imagine could be a completely different approach: directly prompting models with previously sampled ideas, and asking models to generate different plans."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weaknesses"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The authors achieve impressive results on three benchmarks, e.g. they make Claude 3.5 Sonnet achieves a pass@200 of 77.0% on LiveCodeBench.\n2. They provide an interesting motivation for their work: showing that idea generation, not idea execution is the bottleneck of LLMs and they the solutions they generate lack diversity. It's interesting that \"a correct sketch is sufficient to produce the correct final solution with relatively high accuracy, even only after 10 tokens of backtranslated solution.\" and that conditioning on a correct idea or bad idea polarizes the score distribution that much.\n3. They provide and elaborate scaffolding for generating solutions.\n4. The conducted experiments are sound. I like showing scaling of pass@k as a function of k. I like that all frontier models as well as open-weight models were used in experiments.\n5. Results are complemented with an interesting analysis of the role of diversity (Fig. 5)\n6. The paper is clearly written"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors describe a new method for code generation (for short problems). It involves generating a set of observations about the problem (in natural language), constructing plans to solve it and then generating solutions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The authors do not compare with several important baselines, e.g. iterative methods such as ReAct, Reflexion and agentic approaches (e.g. AgentCoder). I thinks that a bit weakness: there are relatively simple methods and scale well to more complex tasks (e.g. SWE-Bench).\n2. I don't think it's that surprising or impressive that burning more test-time compute [1, 2, 3] leads to better results. A fair comparison would involve, e.g. a scatter plot of pass@200 on X axis and compute spent on Y axis. Compute spent can be operationalized as either tokens or dollars spent (ideally you'd report both). Then, the question is: is your method strictly optimal? Is it Pareto-optimal? See [1, 2, 3] for a discussion.\n\nMore specific points:\n3. I think the opening sentence of the abstract is false as of November 2024: \"While scaling training compute has led to remarkable improvements in large language models (LLMs), scaling inference compute has not yet yielded analogous gains\". See again [1, 2, 3]. \n4. I don't understand this sentence: \"LLMs as chatbots, in which models are oftentimes optimized to produce a single correct answer (Rafailov et al., 2024; Ouyang et al., 2022).\" RL optimizes for reward (which could be correctness) and DPO optimizes a contrastive loss (e..g. preferring correct responses over incorrect ones). Neither optimizes for a single correct answer. This could be the case for supervised fine-tuning though.\n\n\n[1] Large Language Monkeys: Scaling Inference Compute with Repeated Sampling\n[2] An empirical analysis of compute-optimal inference for problem-solving with language models\n[3] OpenAI O1 system card\n[4] AI Agents That Matter"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to the weakness."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper is well-written. Analysis and ideas are presented progressively, making it easier to follow. The proposed method is well-motivated by the polarized results of different solution sketches. The experiment results also show clear improvement over the baseline model and methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a new methodology to improve language model test time scaling efficiency. Their study shows that simply sampling more outputs from the same input prompt will have limited diversity, thus limiting the improvement of the model performance. To avoid this issue, they propose to sample the natural language plan of the solution first and then ask the language model to generate a solution based on the plan. They propose an effective method for sampling various plans, and experiments show that the proposed method improved the model accuracy in best-of-n settings."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Although the method shows promising results with models including Claude 3.5 sonnet and GPT-4o, etc., the improvement over o1-mini is marginal. Does this suggest the method is not compatible with other inference-time scaling compute?\n2. While the proposed method shows promising improvement compared to naive sampling and Pass@1, how does it compare to another search-based method, like MCTS?"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Searching over high level plans in natural language rather than directly over code induces diversity in generated outputs, which drastically increases effectiveness of inference-time compute."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024planning,\ntitle={Planning in Natural Language Improves {LLM} Search for Code Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=48WAZhwHHw},\nnote={under review}\n}"
},
"abstract": {
"value": "While scaling training compute has led to remarkable improvements in large language models (LLMs), scaling inference compute has not yet yielded analogous gains. We hypothesize that a core missing component is a lack of diverse LLM outputs, leading to inefficient search due to models repeatedly sampling highly similar, yet incorrect generations. We empirically demonstrate that this lack of diversity can be mitigated by searching over candidate plans for solving a problem in natural language. Based on this insight, we propose PLANSEARCH, a novel search algorithm which shows strong results across HumanEval+, MBPP+, and LiveCodeBench (a contamination-free benchmark for competitive coding). PLANSEARCH generates a diverse set of observations about the problem and uses these observations to construct plans for solving the problem. By searching over plans in natural language rather than directly over code solutions, PLANSEARCH explores a significantly more diverse range of potential solutions compared to baseline search methods. Using PLANSEARCH on top of Claude 3.5 Sonnet achieves a pass@200 of 77.0% on LiveCodeBench, outperforming both the best pass-rate achieved without any search (pass@1 = 41.4%) and using standard repeated sampling on top of existing non-search models (pass@200 = 60.6%). Finally, we show that, across all models, search algorithms, and benchmarks analyzed, we can accurately predict performance gains from search as a function of the diversity over generated ideas."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"LLM",
"search",
"inference-time compute",
"competitive programming",
"reasoning",
"code generation",
"pass@k",
"diversity"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/0e19cecea2117523b14510bb9ca16804f8f2edd8.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Planning in Natural Language Improves LLM Search for Code Generation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
48nAxwEyQ0 | FAVEN: Fast Audio-Visual Embodied Navigation in 3D Environments | main | Active | audio-visual learning;audio-visual navigation | applications to computer vision, audio, language, and other modalities | 1;3;5;5 | 5;4;3;4 | 1;2;3;3 | 2;2;2;2 | 2;1;3;2 | 3.5 | 4 | 2.25 | 2 | 2 | -0.852803 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "I'm happy with the idea and the empirical gains of the proposed approach over prior AudioVisual Navigation methods. However, the paper's clarity is lacking, and the experiments have important limitations. There is also a novelty concern that I would like to be addressed."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "## Well thought-out design of architecture\nThe unimodal components and the fusion mechanisms are well designed and sensible. This is also the first early-fusion approach that I'm aware of for this problem space. Making this work successfully is a good value-add to the community.\n\n## Good experiment results with SoTA\nTables 1 and 2 show comparisons against the prior SoTA methods for audio-visual navigation and FAVEN performs very well, achieving the new SoTA with a good margin. \n\n## Good ablation studies\nThe ablation studies in Tables 3 and 4 addressed any initial concerns I had about the design choices made in the approach section. These are well thought out and executed."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a novel early fusion architecture (FAVEN) for the audio-visual navigation problem. FAVEN consists of unimodal transformer blocks to process visual and audio inputs, and multimodal fusion blocks to cross-attend to information across the two modalities. Results on Matterport3D and Replica benchmarks for Audio-Visual Navigation demonstrate state-of-the-art results. Ablation studies are performed to assess the various design choices and study hyperparameters."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "## Writing clarity\n* L161 - Why late fusion for depth, but early fusion for RGB and audio?\n* Approach clarity\n\t* Eqn 1 - which blocks use this self-attention mechanism? Is it BLK 1, BLK 2, ... from Figure 3? Are there Q, K, V weight matrices applied on the embeddings before performing self-attention?\n\t* What is the flow of information through the architecture in Figure 3? Are equations 2 and 3 happening instead of equation 1? Does equation 5 use the outputs of equations 2 and 3?\n\t* L226 - Does this mean that $\\hat{f}_i^a$ are discarded after each block? Are the same fusion tokens $f_i$ are used as inputs for each block?\n\t* Why is MAMBA needed here? Isn't 392 tokens very small compared to standard LLM applications (e.g., 100k+ tokens) where MAMBA is used?\n* How is model trained? What are the loss functions employed?\n* What is search time? Why is search time improvement of 88% not reflected in navigation metrics?\n## Experimental limitations\n* Section 3.5 - The real-world testing is very limited since it only involves one sound source in one environment for one episode. Moreover, the success / SPL / SNA metrics are not reported for all methods. L319 - 323 - the concluding claims from this experiment are very strong even though the evaluation setting is simplistic and limited.\n* Missing error bars in Tables 1, 2, 3 and 4. Training policies (especially through RL) can be extremely noisy. It is good practice to train policies with multiple random seeds and report the mean and standard deviation to measure the significance of differences in performance. \n* Missing comparison to other fusion mechanisms from VLM literature: This paper proposes one method of fusing information from multiple modalities. However, there are well known approaches to multimodal fusion featuring different levels of fusion (early, mid and late) in the VLM literature. Examples of models: BLIP, Unified IO, Unified IO 2, Flamingo, Chameleon, etc. 
These have not been qualitatively or quantitatively compared against. Note: I do realize that these have not been directly applied to the Audio-Visual navigation problem, but that does not exempt a comparison to these methods if architecture is a key contribution from the paper.\n## Novelty concerns\n* What is the difference between learnable fusion tokens and [REG] tokens from [Vision Transformers Need Registers](https://arxiv.org/abs/2309.16588)?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Could the authors include additional related works in Section 3.1 on Revisit Audio Visual Navigation to support the claims made between lines 178 and 182?\n\n2. In line 198, the authors mention that as data passes through each transformer block, tokens aggregate modality-specific features. Could the authors qualitatively illustrate how the prompts perform on specific samples, perhaps by visualizing token activations?\n\n3. The “mamba block” is not visible in Figure 2. Could the authors revise Figure 2 to clearly indicate where and how the mamba block is used?\n\n4. Could the authors provide a comparison of the number of parameters in Table 1, as this would help contextualize the efficiency of the proposed model?\n\n5. To further validate the proposed LFT, could the authors conduct an ablation comparing it with traditional fusion methods such as early fusion, late fusion, and cross-attention fusion?\n\n6. The LFT approach seems conceptually related to cross-modal prompts. Could the authors compare their approach with other recent cross-modal prompt methods, such as those by Duan et al. (2024) and Zhai et al. (2024), to clarify differences and relationships? Additionally, would the authors consider discussing these works in the related work section?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper propose a new fusion strategy to achieve more fast audio visual embodied navigation by using learnable tokens. The proposed approach is verified to be effective on diverse datastes.\n\n\n2. The paper is well written and each component is verified to be effective in the corresponding ablation study.\n\n\n3. The method is clearly introduced and easy to understand."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper the authors proposed to use learnable tokens to achieve fast audio visual navigation, where MAMBA model is used to achieve more reasonable feature learning. \nThe authors showcased the performance of the proposed approach in the real world scenarios.\nThe proposed approach is verified to be effective on 2 datasets for 3D environments navigation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. No related works in Revisit Audio Visual Navigation in Section 3.1. The authors are encouraged to add some related works in this subsection to support the claim between line 178 and line 182.\n\n2. On line 198 the authors mentioned here, as data passes through each transformer block, these tokens aggregate essential modality-specific features. Would it be possible to qualitatively showcase the prompts perform on some specific samples (e.g., visualizations of token activations)?\n\n3. The mamba block can not be observed in the Figure 2. The authors are suggested to revise Figure 2 accordingly to indicate where mamba block is used and how to use it.\n\n4. Lack of comparison regarding the number of parameters. Could the authors provide the comparison of number of parameters in Table 1?\n\n5. The authors should conduct another ablation regarding the proposed LFT compared with traditional early fusion, late fusion, cross attention fusion, etc.\n\n6. The LFT seems to be related to cross modal prompts. The authors are suggested to make comparison with the following work (a,b) to justify the difference and relationships. These works are also suggested to be discussed in the related work section.\n\na. Duan, H., Xia, Y., Mingze, Z., Tang, L., Zhu, J., & Zhao, Z. (2024). Cross-modal prompts: Adapting large pre-trained models for audio-visual downstream tasks. Advances in Neural Information Processing Systems, 36.\n\nb. Zhai, Y., Zeng, Y., Huang, Z., Qin, Z., Jin, X., & Cao, D. (2024, March). Multi-Prompts Learning with Cross-Modal Alignment for Attribute-Based Person Re-identification. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, No. 7, pp. 6979-6987)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Many of my questions are included in the Weaknesses above. I hope the authors can respond to them concisely. Additionally,\n\n1.\tWhat is the learnable fusion tokens $\\\\{x^{av}\\_{i}\\\\}^{n_{av}}\\_{i=1}$ in Line 215, and how is this different from the fusion tokens $\\\\{f_{i}\\\\}^{n_{f}}_{i=1}$ in Line 211?\n2.\tThe $x^{a,f}\\_{j}$ and $x^{v,f}\\_{j}$ in Equations (2) and (3) are also quite confusing, concisely, these are just the concatenation of the encoded audio/visual tokens and the fusion tokens, right? \n3.\tFollowing the above question. Where does $X^{a}$ and $X^{v}$ come from? Are they the output of Equation (1)? But in Equation (1), it seems that $X^{a}$ and $X^{v}$ are the inputs to the two intra-modal self-attention.\n4.\tWhat is $\\hat{x}^{av}\\_{i}$ in Line 225?\n5.\tLine 260: where are the “unimodal self-attention transformers” in Figure 1 and Figure 2? Are they exactly the BLKs?\n6.\tThe Fusion Token path in Figure 1 and Figure 2 is quite misleading because audio and visual tokens are passed to the MM BLKs for interaction, but there is no arrow in the figure indicating this.\n7.\tThe number of tokens symbols $n_{av}$, $n_{a}$, $n_{v}$, and $n_{f}$ in Section 3 and Table 4 are inconsistent; please clarify. \n8.\tThe proposed pipeline also applies depth features, why the depth features are not considered in feature fusion? 
Any experiment to justify this?\n\nSuggestions:\n\n1.\tClearly define all symbols and notations and be consistent, e.g., $X^{av}$ and $X^{a}_{f}$ both mean a concatenated token sequence, but the latter is written as superscript and subscript.\n2.\tDon’t repeat simple and repetitive expressions, e.g., the softmax-attention formulation in Equations (1), (4), and (6).\n3.\tUse consistent names for model components, e.g., the “multi-modal blocks” and the “context interaction block”.\n4.\tMany descriptive statements can be made less repetitive and much more concise, e.g., Line 137-140, Line 172-175, Line 178-190, and Line 263-269 have very similar contents.\n5.\tFigure 3 seems like a very repetitive and not informative illustration as Figure 2. I suggest removing it.\n6.\tOverall, the writing of this paper looks very hasty to me. I sincerely suggest the authors carefully polish the paper and clearly all the items I listed above."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1.\tThe paper explores an early fusion method for the visual and audio features, whose integration is crucial in addressing the audio-visual navigation task. This idea is well-motivated.\n2.\tExperiments on the benchmark Matterport and Replica datasets significantly outperform existing approaches, demonstrating the effectiveness of the proposed method. \n3.\tI am happy to see that the paper performs real-world experiments; it is a bonus to the paper.\n4.\tThe overall idea and method are simple but effective, which is likely to impact future works on relevant problems."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes an early fusion architecture for integrating audio and visual features to address the audio-visual embodied navigation problem. Specifically, learnable fusion tokens and cross-attention-based multi-modal interaction blocks are introduced to achieve this. The paper also attempts a Mamba-based fusion block. Experiments on the benchmarking Matterport3D dataset and Replica dataset are performed. A real-world experiment is also conducted."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tThe unclear presentation of the information flow in the proposed model, especially the ordering/input/output of modules described in Section 3.2 and Section 3.3, and how they correspond to the schematics in Figure 2 and Figure 3, is very confusing. \n - I hope the authors can elaborate on the exact operations in BLKs and MM-BLKs and respond to my questions about the Equations listed in the Questions section below.\n - Other components, such as the visual encoders, GRU, Linear layer, Action and Critic modules in the system, and training objectives applied in this paper, are also unclear. I presume the authors follow some previous implementations, but it is important to clarify the entire system.\n2.\tThe experiments presented in this paper are very shallow; this paper focuses on early fusion; hence, more in-depth experiments should be conducted on this point, including exploring other design alternatives and studying how exactly the fusion design influences the agent behavior.\n - If my understanding is correct, the proposed fusion and interaction are essentially cross-attention from visual/audio to fusion tokens and vice versa. I wonder if the authors have attempted a more efficient and deeply-bound approach by feeding all visual and audio tokens into a single multi-modal transformer for feature fusion. 
\n - There has no strict comparison to mid-level or late fusion based on the same pipeline.\n3.\tThe authors highlight Mamba-based fusion blocks as one of the key contributions, but there is no information provided on the configuration of SSM or exactly how it is used to process the visual-audio tokens.\n - The authors mentioned an input sequence length of 392, which is very small, and Mamba should not show any speed advantage according to the Mamba literature and my personal experience.\n - Table 3 presents a speed advantage by comparing Mamba and softmax-attention; I suspect that the softmax-attention is a raw pytorch implementation without any CUDA optimization, such as Flash-Attention-2. In this case, the comparison is unfair to me because Mamba is nicely hardware-optimized. \n - Additionally, how to use the sequential SSM to process multimodal non-causal visual tokens is a long-lasting research problem; it is very surprising to me that using Mamba gets better results. Again, there is no explanation of the implementation in this paper. I hope the authors can clarify more. \n4.\tThe paper claims a significant speed advantage (88.8% decrease in search time) compared to previous approaches. However, it is unclear whether the efficiency comes from the proposed fusion method (the agent runs fewer steps) or because the model runs faster. It is unclear what the architectural and inference differences are between FAVEN and the existing works. It is also unclear what the hardware is when comparing the processing speed. Overall, there is too little supporting information for the papers to make such a claim. \n5.\tI appreciate the real-world experiments presented in Section 3.5, however,\n - Is there any physical robot running in the real world? I am too confused by only viewing the supplemental video to understand how the entire system works. What exactly makes the control and motion?\n - I think the discussion in this section is severely overclaimed. 
This is only a single example, and there is no comparison to any other methods in this real-world setting."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1.\tThe structure of this paper is unclear. For example, the authors present the real-world experiments in section 3.5, and detailed the experiments in simulated environments in section 4. It is recommended that all experiments be grouped together in a single section. In particular, it is advised that the real-world experiments be placed later in the paper, following the presentation of the simulated results. \n2.\tIt seems that the authors conducted only one experiment in one real-world scenario (apartment). This is insufficient to prove the generalization ability of the proposed method in the real world. To achieve this, it is recommended that multiple experiments be conducted in a variety of real-world environments such as offices, classrooms, and other similar settings. Furthermore, it is also suggested that the methodology employed in the episode's production be elucidated in detail. This should include the method used to determine the robot's initial position and the target position, the distance threshold between the initial and target positions, and other relevant information.\n3.\tAs the method proposed in this paper is for embodied audio-visual navigation, what is the configuration of the robot and the sensors (e.g., the model of the robot, the configurations of the RGB-D camera and the microphone arrays), as well as other relevant information (e.g. the sampling rate of the audio, the resolution of the images) in the real-world experiments? The video provided by the authors in the supplementary material appears to be captured by an individual using a mobile phone rather than by a camera mounted on a robot.\n4.\tAs for the experiments on navigation efficiency, the authors said that their model achieved an up to 88.8% decrease in search time on the Replica dataset. As far as I know, the commonly used metrics on navigation efficiency are SPL, SNA and NA, as presented in Table 1 of this paper. 
What is the definition of the search time here and how the 88.8% calculated?\n5.\tThe contribution of this paper is weak. In my opinion, the authors have implemented a few modifications to the observation encoder from Av-Nav. In particular, the authors replaced the audio encoder and RGB-encoder from CNN to transformer and incorporated a Mamba module for multimodal feature fusion. The article is devoid of innovation approaches to map construction, waypoint prediction, or navigation decision-making. Furthermore, the ablation experiments in this paper are insufficient. In particular, the authors conducted ablation experiments solely on Replica dataset, yet lacked experiments on Matterport3D simulated environments and real-world environments. Additionally, in the ablation study (Table 3), it is unclear which scenario was used to calculate the search time metric. Was it heard or unheard?"
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "The authors introduce early fusion into audio-visual navigation by deploying a Mamba block to model the relationship between the visual and audio embeddings. This approach has been shown to improve the success rate and efficiency of navigation to a certain extent."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors present a novel architectural approach to audio-visual navigation in three-dimensional environments. This method performs early fusion by combining audio and visual observations into tokens, thereby correlating information from both modalities to enhance the efficacy of the decision-making process. The effectiveness of this method is demonstrated by experiments conducted in the Replica and Matterport3D simulation environments."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tThe structure of this paper is unclear and difficult to follow.\n2.\tThe number of real-world experiments is insufficient to prove the effectiveness of the proposed method. \n3.\tThe configurations of the real-world experiments are lacking.\n4.\tThe comparative experiments on navigation efficiency are unclear. \n5.\tThe contribution of this paper is weak: only the replacement of the audio and visual encoders of AvNav, plus the introduction of a Mamba block for early fusion."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "In this paper, we introduce FavEN, a novel transformer and mamba architecture that combines audio and visual data into early fusion tokens for embodied navigation."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024faven,\ntitle={{FAVEN}: Fast Audio-Visual Embodied Navigation in 3D Environments},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=48nAxwEyQ0},\nnote={under review}\n}"
},
"abstract": {
"value": "Achieving fast audio-visual embodied navigation in 3D environments is still a challenging problem. Existing methods typically rely on separate audio and visual data processing merged in late stages, leading to suboptimal path planning and increased time to locate targets. In this paper, we introduce FavEN, a novel transformer and mamba architecture that combines audio and visual data into $\\textit{early fusion}$ tokens. These tokens are passed through the entire network from the initial layer on and cross-attend to both data modalities. The effect of our early fusion approach is that the network can correlate information from the two data modalities from the get-go, which vastly improves its downstream navigation performance. We demonstrate this empirically through experimental results on the Replica and Matterport3D benchmarks. Furthermore, for the first time, we demonstrate the effectiveness of early fusion in improving the path search speed of audio-visual embodied navigation systems in real-world settings. Across various benchmarks, in comparison to previous approaches, FavEN reduces the search time by 93.6\\% and improves the SPL metrics by 10.4 and 6.5 on heard and unheard sounds."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"audio-visual learning",
"audio-visual navigation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/65f88cde1678ce0df95a393623ed91cbc9f27201.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/e758c785b413fdac9d47d2f817856959eb64a7f7.zip"
},
"title": {
"value": "FAVEN: Fast Audio-Visual Embodied Navigation in 3D Environments"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
49fIu0yDJ4 | Knowledge Benchmark Graph: Assisting Large Language Models in Designing Models by Retrieving Benchmark Knowledge | main | Active | Knowledge Graph;Auto Machine Learning | transfer learning, meta learning, and lifelong learning | 5;5;6;8 | 3;2;4;3 | 3;2;3;3 | 2;3;3;3 | 3;2;3;3 | 6 | 3 | 2.75 | 2.75 | 2.75 | 0.288675 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please refer to the weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "This paper has several notable strengths that enhance its contribution to the field of automated machine learning.\n\n1. The introduction of a comprehensive graph dataset models the relationships between datasets, models, and performance. This structured resource simplifies the model selection process for researchers and practitioners.\n2. The theoretical framework is well-articulated and provides a solid basis for the proposed methods. This enhances the credibility of the approach and demonstrates a deep understanding of the principles involved.\n3. The experiments conducted are thorough and well-executed, testing the methods across various datasets. These results provide strong empirical support for the authors’ theoretical claims.\n4. The research has significant implications for automated machine learning (AutoML), allowing for the automatic identification of optimal model architectures. This capability can reduce the time and expertise required for model design, making machine learning more accessible.\n\nOverall, the paper effectively combines a valuable dataset, strong theoretical foundations, and solid experimental validation, positioning it as a promising contribution to AutoML. Its findings could lead to further advancements in automated processes for model development."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a graph dataset that helps connect datasets, models, and model performance, making it easier for machine learning systems to automatically find the best model architecture for a specific dataset. Since real-world datasets are often new and unseen, the authors create a method to measure how relevant different datasets are to each other, which helps in sharing knowledge between them. This method allows the system to use information from existing benchmark data, ensuring that high-performing models can still be applied to new datasets. Additionally, the authors present a new metric that focuses on the most useful insights, which makes the model selection process even better. In their experiments, they test this approach on various datasets to show how effective and efficient it is, highlighting its potential to improve model design and performance in real-world situations."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The theoretical explanations in the paper could be improved with additional background to aid reader comprehension. \n1. For example, in Definition 1, it would be useful for the authors to provide an overview of existing problem formulations in model transfer or AutoML to better contextualize their approach. \n2. Explaining the motivation for using a probability lower bound in Definition 1 and its relevance to practical model transfer would clarify this choice. \n3. It would also be helpful to indicate whether this problem formulation is novel or based on existing methods, and if it is novel, to discuss the advantages it brings over previous formulations.\n\nIn Section 4.3, the intuition behind the transferability score could be further clarified. \n1. A conceptual explanation of what the transferability score represents in practical terms would be beneficial, along with a small example or illustration to demonstrate how it is calculated and interpreted. \n2. Additionally, comparing this score with existing metrics for evaluating model transfer effectiveness could further clarify its utility.\n\nFurthermore, the paper’s discussion on integrating Large Language Models (LLMs) into the proposed framework could be more comprehensive, as it is currently quite brief. \n1. The authors might expand on the specific role of LLMs in their approach, detailing how they interact with the Knowledge Benchmark Graph and contribute to model selection or adaptation. \n2. Examples illustrating the LLMs' role in the process would be helpful, as well as a discussion of any potential challenges in integrating LLMs and how these are addressed. \n3. Lastly, comparing this approach to other recent methods that incorporate LLMs for AutoML or model selection would provide a useful context for the reader."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. The LLM-assisted method achieves the best performance, but there is no detailed explanation or design. For example, you can answer the following questions:\n (a) What prompts or instructions were given to the LLM?\n (b) How were the LLM's outputs processed or integrated into the model selection process?\n (c) Were there any constraints or filtering applied to the LLM's suggestions?\n2. It would be good to include more sophisticated designs for the two scores. For example, incorporate edge-level information in the data similarity score calculation, and model architectural information, such as the number of convolutional layers, in the model relevance calculation."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper's writing is clear. The formulation is very straightforward: the authors use a data similarity score and a model relevance score to infer the best potential model on unseen data. \nThe authors provide extensive experiments comparing with SOTA methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose a new solution for AutoML. The authors design a knowledge graph that contains information on data, models, and performance. With the knowledge graph, the method uses a similarity score over data and a relevance score over models to suggest the best model on unseen data. \n\nIn experiments, the authors construct the knowledge graph with graph datasets and GNN architectures. The results show that the method the authors propose achieves the best result on 3/8 tasks. However, with the assistance of an LLM, the model is able to achieve the best result on 5/8 tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The data similarity score the authors propose is too simple. The authors focus their study on GNNs, but the similarity score does not involve any relational information on the edges. \n2. The model relevance score is confusing. The name sounds like it looks for architectural similarity between models, but in reality it is the model's historical performance.\n3. The experiment section that involves the LLM is very vague to me. There is no explanation of exactly how the LLM infers or selects the models.\n\nIn summary, the method basically proposes models based on the following two ideas: a) whether the node features are similar, and b) whether the model has historical performance."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to the weaknesses."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The idea is interesting and the motivation is convincing. \nThe implementation of this idea is relatively complete, including the graph construction process, the design for enhancing generalization when incorporating unseen datasets, and the retrieval mechanism over existing model candidates."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper attempts to construct a graph that stores all the existing datasets, models, and model performance on datasets for future research and development endeavors. Based on the constructed graph, the authors propose metrics to evaluate the similarity between datasets and the effectiveness of the models retrieved for an unseen dataset. The experimental results show that such a graph is beneficial for the development of AutoML."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The work is full of engineering skill but lacks academic insight. For example, controlling and varying the hyperparameters (e.g., delta, the size of datasets, epsilon, etc.) brings limited insight. Perhaps a case study is required to illustrate how the algorithm succeeds in retrieving a good model given an unseen-yet-similar dataset. There should be deeper insights and factors beyond the similarity of datasets, such as the underlying common research issues. What features should the algorithm capture and consider? \nThe scenario is relatively limited. The authors conduct experiments only in the GNN domain. It is unclear whether such an effort could generalize to other ML methods, which makes the contribution of this paper vague. I suggest providing a small amount of additional experimental evidence to demonstrate the generalization ability of this work, which would make it more promising and convincing."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "None"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Which LLM do you employ in the experiments? I cannot find the corresponding information after reading Section 5.1.\n- The name \"knowledge benchmark graph\" might be reconsidered; I think the current name misleads people into thinking this is a new KG benchmark.\n- See Weaknesses"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The idea of this paper is innovative, combining knowledge graphs, LLMs, and AutoML.\n- The authors have done sufficient data preparation, method design, and experiments around this idea."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "- This paper proposes a novel AutoML framework with LLMs for GNN design. In this paper, the authors construct a knowledge benchmark graph to provide the LLM with more domain knowledge about GNN architectures and design new metrics to guide knowledge selection."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- There are some labeling errors in Table 2, e.g., the optimal result on the Citeseer dataset is achieved by GAT but is not bolded.\n- The experiments in the current article stop at the GNN domain and are oriented only to the node classification task, which limits the scope of application. The title also gives the impression that the authors' approach is oriented to a variety of tasks in general scenarios. I think KBG could be applied to more heterogeneous tasks, such as other tasks in the field of graph learning, or even out-of-domain experiments such as CV/NLP that require a GNN method, to verify the effectiveness of the approach. Further, the design of the model need not be limited to GNN models."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024knowledge,\ntitle={Knowledge Benchmark Graph: Assisting Large Language Models in Designing Models by Retrieving Benchmark Knowledge},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=49fIu0yDJ4},\nnote={under review}\n}"
},
"abstract": {
"value": "In recent years, the design and transfer of neural network models have been widely studied due to their exceptional performance and capabilities. However, the complex nature of datasets and the vast architecture space pose significant challenges for both manual and automated algorithms in creating high-performance models. Inspired by researchers who design, train, and document the performance of various models across different datasets, this paper introduces a novel schema that transforms the benchmark data into a Knowledge Benchmark Graph (KBG), which primarily stores the facts in the form of performance(data, model). Constructing the KBG facilitates the structured storage of design knowledge, aiding subsequent model design and transfer. However, it is a non-trivial task to retrieve or design suitable neural networks based on the KBG, as real-world data are often off the records. To tackle this challenge, we propose transferring existing models stored in KBG by establishing correlations between unseen and previously seen datasets. Given that measuring dataset similarity is a complex and open-ended issue, we explore the potential for evaluating the correctness of the similarity function. Then, we further integrate the KBG with Large Language Models (LLMs), assisting LLMs to think and retrieve existing model knowledge in a manner akin to humans when designing or transferring models. We demonstrate our method specifically in the context of Graph Neural Network (GNN) architecture design, constructing a KBG (with 26,206 models, 211,669 performance records, and 2,540,064 facts) and validating the effectiveness of leveraging the KBG to promote GNN architecture design."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Knowledge Graph",
"Auto Machine Learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/35ac9aae19c8ff34e408e335214a76569e38ba25.pdf"
},
"presentation": null,
"primary_area": {
"value": "transfer learning, meta learning, and lifelong learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Knowledge Benchmark Graph: Assisting Large Language Models in Designing Models by Retrieving Benchmark Knowledge"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
49jkevjF6x | Multilingual Abstractive Event Extraction for the Real World | main | Active | dataset;event extraction;multilingual;zero-shot;entity linking | datasets and benchmarks | 1;3;3;5 | 2;4;4;4 | 1;2;2;3 | 1;2;1;2 | 1;2;4;3 | 3 | 3.5 | 2 | 1.5 | 2.5 | 0.816497 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1. As mentioned in Lines 248~250, the main event excludes historical events. Does this mean that, when applying the system in the wild, the system can only be used for extracting \"novel\" events?\n\n2. In the system, up-to-date, domain-specific, and complete entity lists are required. How do these become available in real applications?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Multilingual event resources are very rare, especially for under-represented languages.\n2. The discovery and attempt at re-defining the event extraction task is very important to the research community."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a new task called Abstractive Event Extraction and repurposes an existing resource, ACLED, to create a dataset, Lemonade, for the task. Lemonade covers 16 languages, including various under-represented languages such as Burmese, Indonesian, Nepali, etc. The key differences between the new task and the traditional Event Extraction task are that (1) the arguments are linked to a pre-defined domain entity base, rather than a text span; and (2) they do not focus on extracting event trigger words. For modeling, the authors build a system, Zest, which achieves 57.2% F1 in the zero-shot setting."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The key claim of the paper is to provide a new tool for real-world EE, and thus to propose the new Abstractive Event Extraction problem. However, the authors lack evidence to show how this new task paradigm is more helpful than the classical definition of event extraction. A discussion of different use cases should be included.\n\n2. I think the claimed \"novel\" Abstractive Event Extraction problem is simply the previous event extraction annotation paradigm plus an \"entity-linking\" filter on argument lists. For example, in Figure 1, the entity \"two communities\" can and should be disambiguated by applying an entity linker after a traditional event extraction system, and this practice has been used in the information extraction community for a long time. For the two-events-versus-single-event case in Figure 1, \"event coreference\" systems are designed for this purpose. From this point of view, I do not see how the claimed \"Abstractive Event Extraction\" is a new task.\n\n3. Several of the design choices are arguable:\n- Why is a single event per document enough? If we consider diverse potential downstream applications, such as event interactions and plot understanding, a single event is far from sufficient.\n- It is unclear to me whether intermediate annotations SHOULD be included or excluded in annotation. In many previous efforts, such as ACE, these intermediate annotations are very important for (a) guaranteeing annotation quality and (b) evaluating step-by-step performance. In this paper, I do not see how missing these annotations influences annotation quality and the corresponding IAA. I think this paper neglects the potential ambiguity introduced by the ACLED dataset."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "No ethics concern."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "To the ICLR community: as a community we will at some point have to address the problem of the zero entry bar for an astronomical number of submissions, where scientific rigor and the existence of hypothesis testing are hopefully checked automatically before reaching the reviewers. Take this submission as an example: what hypothesis is the submission even testing?"
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "A very detailed description of the Lemonade dataset developed for the AEE task and of the ZEST system that provides a baseline for it."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a dataset called Lemonade (Large Expert-annotated Multilingual Ontology-Normalized Abstractive Dataset of Events) for benchmarking performance on abstractive event extraction, which the authors abbreviate as AEE. The paper then presents ZEST, a zero-shot system for AEE, which serves as a baseline for the Lemonade dataset they introduce."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper does not really show why Lemonade is useful and stands out among other such datasets, or that ZEST is a performant system for AEE: the results are not a clean sweep, and the baseline LLaMAX is not a widely recognized system for such a task."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See above"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper provides a comprehensive schema for the abstractive event extraction task and, at the same time, offers a high-quality dataset for this task.\n2. The authors also provide a robust framework for abstractive event extraction."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces an approach to event extraction, known as Abstractive Event Extraction (AEE), which moves beyond traditional mention-level annotations to capture a deeper understanding of events in text. The authors present a multilingual dataset, LEMONADE, covering 16 languages and annotated by experts for real-world socio-political event tracking. The study also introduces ZEST, a zero-shot AEE system that performs well without training data, achieving a 57.2% F1 score, and a supervised model that achieves 71.6% F1. These approaches aim to enhance event extraction in low-resource languages and complex socio-political domains."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "This paper contributes to event extraction by defining a schema for abstractive event extraction and creating a high-quality dataset. However, from my perspective, it has several weaknesses.\n\n1. First, while the defined schema is general, it lacks specificity. For example, arguments such as \"entity,\" \"group_1,\" and \"group_2\" are extracted without a precise argument type, which may limit practicality in real-world applications. A more useful approach could involve defining argument types as an open-set extraction task, where argument types are inferred from the context rather than using general labels.\n\n2. Second, the authors discuss some challenges in current work, such as Entity Normalization and Linking. However, not all challenges are thoroughly addressed in this paper. For instance, event coreference resolution is mentioned but not actually covered, which raises reader expectations without fully addressing the challenge.\n\n3. Finally, as a potential regular paper at ICLR, the methodology feels relatively weak aside from the dataset contribution. The framework design is straightforward, which diminishes the overall impact of the paper's contributions."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Need to analyze the impact for different event types and arguments.\n2. The paper lacks in-depth experimental analysis, such as error analysis and case studies.\n3. When constructing the training set, do the authors consider the impact of different data scales on the results?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. This work offers a multilingual event extraction dataset.\n2. This work links the entities in the text to the corresponding entity database.\n3. This work provides precise annotation of specific location information."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper works on the event extraction task. Based on existing dataset ACLED, the authors construct a multilingual event extraction dataset LEMONADE. They conduct experiments on both open-source and closed-source large language models, achieving meaningful results with F1 score of 71.6%."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. This work constructs an event extraction dataset but does not provide detailed data analysis, such as the distribution of event types across different languages, argument distribution, document count, and entity distribution.\n2. The experimental section is insufficient and needs additional experiments to verify the effectiveness of the proposed dataset.\n3. Table 4 shows an uneven distribution of event types. Is this factor considered in the experiments?\n4. Table 2 presents the overall results, but how do the results vary across different event types and languages?\n4. Supervised experiments based on large language models are crucial, and the authors should focus on this aspect in the methodology section."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We present a new abstractive formulation for the event extraction task, along with a new dataset covering 16 languages and a novel zero-shot system for it."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024multlingual,\ntitle={Multlingual Abstractive Event Extraction for the Real World},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=49jkevjF6x},\nnote={under review}\n}"
},
"abstract": {
"value": "Event extraction (EE) is a valuable tool for making sense of large amounts of unstructured data, with a wide range of real-world applications, from studying disease outbreaks to monitoring political violence. Current EE systems rely on cumbersome mention-level annotations, and event arguments are frequently restricted to ungrounded spans of text, which hinders the aggregation and analysis of extracted events. In this paper, we define a new abstractive event extraction (AEE) task that moves away from the surface form and instead requires a deeper\nwholistic understanding of the input text. To support research in this direction, we release a new multilingual, expert-annotated event dataset called Lemonade, which covers 16 languages, including several for which no event dataset currently exists. Lemonade has 41,148 events, and is based on the Armed Conflict Location and Event Data Project, which has been collecting and coding data on political violence around the globe for over a decade. We introduce a novel zero-shot AEE system Zest that achieves a score of 57.2% F1 on Lemonade. With our supervised model that achieves 71.6% F1, they represent strong baselines for this new dataset."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"dataset",
"event extraction",
"multilingual",
"zero-shot",
"entity linking"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/3fd7411a770aaa0c89dc1b56a27fb46e76db8519.pdf"
},
"presentation": null,
"primary_area": {
"value": "datasets and benchmarks"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/7dabc6b2f593494fd0265a280213ebba52493c5c.zip"
},
"title": {
"value": "Multlingual Abstractive Event Extraction for the Real World"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
49qqV4NTdy | Understanding Alignment in Multimodal LLMs: A Comprehensive Study | main | Active | foundation models;multimodal llm;alignment;image understanding | foundation or frontier models, including LLMs | 6;6;8 | 3;3;3 | 3;3;3 | 2;2;3 | 3;3;3 | 6.666667 | 3 | 3 | 2.333333 | 3 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Will the code be open-sourced to facilitate further research in this area?\n2. How does the proposed approach ensure that the distribution of generated hallucination data aligns with real-world hallucination data distributions?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper introduces a unique approach to generate preference data for MLLMs by utilizing model biases without human or external model annotations.\n2. The paper provides empirical analysis, comparing BDHS with other alignment methods across multiple benchmarks, highlighting its effectiveness and resource efficiency in aligning MLLMs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper addresses challenges in aligning MLLMs with human preferences to improve response accuracy and reduce hallucinations. It reviews various offline and online alignment strategies, including DPO and RLHF, and introduces BDHS. BDHS generates preference data without human annotation, leveraging model-inherent biases to enhance performance cost-effectively. Results indicate BDHS is competitive with established preference datasets, demonstrating its potential as a lightweight alternative to traditional alignment approaches for MLLMs, especially in tasks requiring high fidelity between visual inputs and textual responses."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The proposed data sampling approach partially mitigates hallucination issues in MLLMs but does not completely resolve them.\n2. The BDHS method's dependency on hyperparameters, such as mask thresholds, could affect reproducibility across different model implementations."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1、The DHS method induces hallucinations by performing attentional masking in the latent space. Is it possible that this strategy could affect the sensitivity of the model to critical details in the image? Have ablation experiments been performed to quantify the effect of this attentional masking in scenes of varying visual complexity? In addition, how to select the range of attention masking, and whether the alignment effect can be optimized by dynamic adjustment?\n\n2、The paper mentions filtering out different non-preferred responses by semantic similarity score. For this filtering mechanism, is it possible that there is a bias that makes the model perform better or worse on specific types of semantic content? Have comparative experiments with different similarity scoring models been conducted to confirm the robustness of the selection mechanism? Furthermore, could this similarity score lead to a tendency for models to oversimplify when faced with less common or more complex visual scenes?\n\n3、Does the performance of the BDHS method on the LLaVA 1.6-7B model generalize to larger or smaller model sizes? Have any experiments been conducted on models with different parameter numbers to explore whether this approach exhibits different advantages or disadvantages depending on the model size? Especially on small-scale models, is it possible that the method effect is not significant due to parameter limitations?\n\n4、To what extent do current hallucina-evaluation benchmarks such as POPE and MMHALBench-V truly reflect model performance in real-world applications?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1、Comprehensive Analysis: The paper provides a detailed comparison of alignment methods, including offline and online strategies, and evaluates their effectiveness using diverse datasets.\n\n\n2、Novel Data Generation Method: The introduction of BDHS offers a cost-effective alternative to traditional alignment approaches, reducing the need for human annotation or external supervision while maintaining competitive performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates preference alignment techniques for Multimodal Large Language Models (MLLMs), focusing on how they address hallucinations, which occur when models produce responses not grounded in visual inputs. The study categorizes alignment methods into offline and online approaches and examines various multimodal preference datasets. The authors propose a novel data generation method called Bias-Driven Hallucination Sampling (BDHS), which does not require human annotations or external models. Experimental results demonstrate BDHS’s effectiveness compared to more resource-intensive methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1、Clarification of Methodological Choices: It would be helpful to better understand why specific thresholds and parameters were chosen for BDHS, such as the similarity score threshold and masking strategy.\n\n\n2、Generalizability of BDHS: It remains unclear whether BDHS can be effectively applied to models beyond the specific ones studied. Further discussion on its applicability to other MLLMs or domains would strengthen the paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. What are the effects of scaling up BDHS in terms of data size or complexity on model performance?\n2. What specific modifications could be made to BDHS to achieve state-of-the-art results on key benchmarks?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The study systematically compares offline and online alignment methods, examining their impact on model performance across various metrics like hallucination reduction and response quality.\n2. BDHS presents a low-cost, innovative solution to generate preference data, showing competitive results against other data-heavy methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper explores preference alignment for improving Multimodal Large Language Models (MLLMs), specifically focusing on reducing hallucinations and increasing alignment between model outputs and image content. It provides a thorough analysis of various alignment methods and introduces a novel approach, Bias-Driven Hallucination Sampling (BDHS), which effectively generates preference data without human annotation or external models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. While the paper examines alignment techniques and datasets, it does not clearly articulate the primary findings from these investigations, which can make it challenging for readers to grasp the significance and implications of the study\n\n2. BDHS demonstrates promising results; however, its effectiveness may differ across various MLLMs and visual tasks. Conducting additional experiments with diverse model architectures would bolster claims regarding its generalizability."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "A study on the effect of different alignment methods and public preference datasets on the performance of multimodal llms"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024understanding,\ntitle={Understanding Alignment in Multimodal {LLM}s: A Comprehensive Study},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=49qqV4NTdy},\nnote={under review}\n}"
},
"abstract": {
"value": "Preference alignment has become a crucial component in enhancing the performance of Large Language Models (LLMs), yet its impact in Multimodal Large Language Models (MLLMs) remains comparatively underexplored. Similar to language models, MLLMs for image understanding tasks encounter challenges like hallucination. In MLLMs, hallucination can occur not only by stating incorrect facts but also by producing responses that are inconsistent with the image content. A primary objective of alignment for MLLMs is to encourage these models to align responses more closely with image information. Recently, multiple works have introduced preference datasets for MLLMs and examined different alignment methods, including Direct Preference Optimization (DPO) and Proximal Policy Optimization (PPO). However, due to variations in datasets, base model types, and alignment methods, it remains unclear which specific elements contribute most significantly to the reported improvements in these works. In this paper, we independently analyze each aspect of preference alignment in MLLMs. We start by categorizing the alignment algorithms into two groups, offline (such as DPO), and online (such as online-DPO), and show that combining offline and online methods can improve the performance of the model in certain scenarios. \nWe review a variety of published multimodal preference datasets and discuss how the details of their construction impact model performance. Based on these insights, we introduce a novel way of creating multimodal preference data called Bias-Driven Hallucination Sampling (BDHS) that needs neither additional annotation nor external models, and show that it can achieve competitive performance to previously published alignment work for multimodal models across a range of benchmarks."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"foundation models",
"multimodal llm",
"alignment",
"image understanding"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/d386f09089ba396461b8aa5f28bcf3f842676325.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Understanding Alignment in Multimodal LLMs: A Comprehensive Study"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
49ti6LOUw5 | UnoLoRA: Single Low-Rank Adaptation for Efficient Multitask Fine-tuning | main | Active | lora;multi-task learning;peft | other topics in machine learning (i.e., none of the above) | 3;3;3;3 | 3;4;3;4 | 3;1;2;2 | 2;2;2;2 | 2;1;1;2 | 3 | 3.5 | 2 | 2 | 1.5 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Why not use a self-implemented LoRA in both multi-task and single-task scenarios, since LoRA is relatively simple to implement?\n- Is there a detailed efficiency analysis available?\n- How to acquire the task embeddings in the paper?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper conducts comprehensive experiments and analysis to verify the proposed method.\n- The paper is well structured, proposing an architecture, UnoLoRA, which integrates a shared hypernetwork that generates task-specific scaling factors."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents UnoLoRA, an approach for parameter-efficient multitask learning in large language models (LLMs) using a single Low-Rank Adaptation (LoRA) module shared across multiple tasks. Building upon LoRA as an implicit regularizer, the authors explore its application in a multitasking context, aiming to reduce the number of trainable parameters while maintaining competitive performance. The paper introduces an architecture, UnoLoRA, which integrates a shared hypernetwork that generates task-specific scaling factors."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The experiments are conducted on T5-series models, which are from 4 years ago. Using a more recent model doesn't necessarily mean aiming for the current SOTA (state-of-the-art), but rather that the behaviors of stronger models might differ, making experiments on T5 impractical. For instance, current models, after instruction tuning, demonstrate strong zero-shot generalization across tasks, making multi-task learning less important.\n- In the first table, the method proposed in this paper does not outperform HyperFormer++, even though they have different amounts of training parameters, the average effectiveness is also quite lacking. Therefore, the experimental results of this paper are not very convincing."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "- What is the relationship between Figure 2 and Figure 1? Which part of Figure 1 is the Shared Hypernetwork shown in Figure 2?\n- For different tasks, does UnoLoRA only change the task embedding and keep the other parts shared between different tasks?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The authors conduct in-depth analyses of LoRA matrices in both single-task and multitask settings, highlighting distinctions in their properties (like effective rank and Frobenius norm) and the roles of A and B matrices. Visualizations like PCA further illustrate how UnoLoRA efficiently manages task-shared and task-specific information.\n- The study’s experiments on the GLUE benchmark provide extensive evidence of UnoLoRA's effectiveness and competitive performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces UnoLoRA, a method for parameter-efficient multitask fine-tuning of large language models (LLMs) through a shared Low-Rank Adaptation (LoRA) module. UnoLoRA leverages LoRA's implicit regularization properties to facilitate multitask learning by using a single adapter shared across all tasks, instead of separate adapters for each task. This approach drastically reduces trainable parameters to 0.05% per task while maintaining competitive performance with existing multitask methods. The model is evaluated on the GLUE benchmark and demonstrates parameter efficiency and improved generalization by capturing both shared and task-specific information. The authors further refine their method with UnoLoRA⋆, which converges faster and performs better in early training stages compared to the initial UnoLoRA."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- For the experiments on the GLUE benchmark, no repeated experiments with different random seeds were performed, and the experimental results are not completely convincing due to the randomness.\n- Only the T5-base model was used for the experiment. The effectiveness of the method was not verified on larger or smaller models, nor on decoder-only models."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "- How does performance change as rank is varied?\n- How do individual components of the method affect performance?\n- What is UnoLoRA$^*$?\n- Is there a task-specific $A$ matrix or not?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "- Simple and seemingly effective way of parameterizing low-rank adapters in the multitask setting. The idea is timely---there have been a lot of improvements in LoRA and related schemes in the last couple of years, and revisiting conditional computation + adapter combinations seems like a promising direction."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a method called UnoLoRA, a procedure for constructing\nlow-rank Transformer adapters in a multi-task setting by training a network to\napply task-specific transformations to a shared adapter. In particular, while\nstandard LoRA parameterizes weight matrices as $W + AB^\\top$ for low-rank\n$A$ and $B$, UnoLoRA parameterizes them as $W + A ~\\mathrm{diag}(H(t)) ~ B^\\top$,\nwhere $t$ is a task representation that includes both a discrete identifier\nand example data and positional embeddings, and $H$ is a hypernetwork. A similar recipe was previously\nexplored by Karimi Mahabadi et al. (2021) under the name of \"HyperFormers\"; as\nfar as I can tell, the main differences are that:\n\n- HyperFormers condition only on task IDs, while UnoLoRA conditions on example\n input data\n\n- HyperFormers also modulate LayerNorm parameters, and not just adapters\n\n- HyperFormers use a slightly different adapter parameterization from the modern LoRA recipe, with\n a nonlinearity in the middle"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Comparatively minor tweak of an existing idea. This wouldn't be an issue on\n its own, except for the fact that the various changes are not evaluated in\n a way that enables direct comparison to HyperFormers, as described below.\n\n- Inconsistencies and missing details in the description of the method. Fig 1\n makes reference to a \"Task-specific A\" parameter that is not mentioned\n anywhere in the formal description of the method---is it used, and if so,\n where? Additionally, the experiments make reference to a method called\n UnoLoRA$^*$, which achieves slightly better performance than the base method\n but does not appear to be described anywhere.\n\n- Major issues in evaluation. The paper's main results are summarized in Fig\n 6(a), which show that UnoLoRA and HyperFormers both pareto-dominate training\n separate adapters for each task---UnoLoRA involves fewer parameters at the\n same level of performance, while HyperFormers give increased accuracy but are\n slightly less parameter-efficient than UnoLoRA. I have two concerns here.\n\n - First, the individual differences between UnoLoRA and HyperFormers are\n never individually evaluated, making it impossible to figure which (if any)\n are responsible for the performance differences.\n\n - Second, and more fundamentally---the whole point of adapter-based methods\n is that they provide a tunable parameter (the adapter rank) that trades\n off between accuracy and parameter count. So what we really need to see\n is the entire accuracy / efficiency curve for both model classes, rather\n than an arbitrary point on each. 
In fact, if I understand correctly,\n even the size of the adapter is totally incomparable between the two\n models being compared: this paper trains UnoLoRA with a rank of 8, while\n the results copied from the HyperFormers paper appear to use a rank of 24.\n\n Without a minimal comparison (or a complete frontier from each model), it is\n possible that all observed differences between methods result from\n incomparable hyperparameter choices.\n\n- Major formatting issues: nearly every citation in the paper is incorrectly formatted (using \\citet instead of \\citep). It seems likely that this paper didn't receive even a single round of proofreading, and should not have been submitted to ICLR in its current form."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- What is the difference between the UNOLora* and UNOLoRA? I haven't found the method difference in your paper?\n\n- It required a comparation to use LoRA to multi task training.\n\n- It is not clear why cross task relation is related to the capability of using LoRA to do multi-task learning."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The method proposed by the authors is simple but effective."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This article proposes a new method called UnoLoRA, which utilizes shared low-rank adaptation (LoRA) modules to achieve efficient multi-task learning for large language models, and reports strong performance on the GLUE benchmark."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The writing and presentation are not good; for example, the caption and figure of Figure 1 seem confusing. Also, the font size in the figure is too small to read.\n\n- Training the shared hypernetwork introduces additional training cost.\n\n- The method is only evaluated on one model, without scaling up the model size/architecture."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Insights into using a single LoRA adapter for multi-task learning, and the actual low-rank representations and how they generalise across tasks."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024unolora,\ntitle={UnoLo{RA}: Single Low-Rank Adaptation for Efficient Multitask Fine-tuning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=49ti6LOUw5},\nnote={under review}\n}"
},
"abstract": {
"value": "Recent research has demonstrated the efficacy of Low-Rank Adaptation (LoRA) as an effective implicit regularizer for large language models. Building on these findings, we investigate whether LoRA can be leveraged for efficient multi-task learning. This study presents experimental observations on utilizing a single LoRA module for multiple tasks in the fine-tuning of large language models. We introduce UnoLoRA*, a novel method for multi-task finetuning, which significantly reduces trainable parameters to just 0.05% per task. Our approach not only uncovers insights into low-rank representations and multitask generalization but also explores LoRA’s capacity to capture task-agnostic knowledge. Our findings affirm that sharing a single LoRA adapter effectively boosts parameter efficiency while ensuring that it learns a more general representation, even as it yields a competitive performance."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"lora",
"multi-task learning",
"peft"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/7e18e358c4407cd3e105afb322c16b1906e7d895.pdf"
},
"presentation": null,
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/f8b0189085c6e4f002734a5e206d5f2d6847b02b.zip"
},
"title": {
"value": "UnoLoRA: Single Low-Rank Adaptation for Efficient Multitask Fine-tuning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
49v8meXjHS | $k$NN Attention Demystified: A Theoretical Exploration for Scalable Transformers | main | Active | efficient transformers;self-attention mechanism;sublinear algorithms;sampling;k-nearest neighbors | foundation or frontier models, including LLMs | 3;5;5;8;8 | 3;3;3;4;4 | 2;3;2;4;3 | 2;2;3;4;3 | 2;2;1;4;3 | 5.8 | 3.4 | 2.8 | 2.8 | 2.4 | 0.926367 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see weaknesses section."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper focuses on the important and relevant topic of the quadratic complexity of attention, given the ever-increasing context length in LLMs, and provides a rigorous theoretical analysis behind the empirical performance of kNN attention.\nThe paper is generally well-written, easy to read, and the ideas are clearly organized and discussed throughout. I enjoyed reading it, and liked how the authors first decompose the attention matrix as expectations over softmax, then use median-of-means boosting to achieve high-probability approximation bounds and runtime complexity bounds. The use of the Gumbel-Max trick and the concentration properties of the Gumbel distribution is also cute, ultimately leading to the gain over quadratic attention.\n\nThe use of 1-step random walks over the transition matrix P to approximate matrix-vector products (giving attention gradients in this case), although known, is also pretty nice. Overall, I appreciate how different well-known ideas are effectively combined."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper theoretically analyzes kNN attention, which improves on the quadratic complexity (with respect to context length, $n$) of traditional full attention. The authors do this by viewing the self-attention matrix as an expectation over multiple softmax distributions, and then use the Gumbel-Max trick, along with the concentration properties of the Gumbel distribution, to efficiently approximate the original self-attention. This lazy Gumbel sampling, combined with k-MIPS, results in a total runtime of $O(n^{1.5})$. Additionally, the work approximates the attention gradients using length-1 random walk sequences, reducing the naive complexity from $O(n^2d)$ to $O(nd^2)$, providing high-probability approximation bounds."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I don’t have many weaknesses to point out, but I do have some questions/points I’d like to raise/discuss:\n\n\n—The experiments in Figure 3b) show that for $k>n^{1/8}$, kNN attention performs quite well, and there doesn’t seem to be much need for $n^{1/2}$ (as suggested in your theory). I understand you’ve mentioned this as potential future work in the conclusion, but why do you think this is the case? As far as I understand, the choice of $k=\\sqrt{n}$ in your theory arises because you want to balance the samples outside $S_i$, which could, in expectation, ruin the top score after the addition of Gumbel noise, and the accuracy of lazy Gumbel sampling. This factor of $n/k$ also appears in the discussion in (Routing Transformers, Roy et al.), where the complexity is $O(nkd+n^2d/k)$, and $\\sqrt{n}$ is the optimal choice. What do you think explains this significant gap?\n\n\n—(Related to the above) For kNN attention without median-of-means (Sec 2.3), you randomly sample outside the top $k$ similarity products and upweigh them to capture the tail of interactions, and this is the algorithm used in the experiments. Median-of-means doesn’t consider the tail at all. Do you think capturing the tail behavior is critical to $k \\ll \\sqrt{n}$ performing well?\n\n—Regarding the experiments in Section 4.2: The true benefit in the backward pass should only show up with large $n$. I understand that training larger models is difficult, but it would be interesting to see what happens with a moderate $n \\sim 1000$ when training with cross-entropy.\n\n—What is $n$ for the final set of experiments (perplexity and approximation error)? Also, for the mean error of the kNN ($k$ vs. $n$) experiment, what is the range of $n$? I couldn’t find these details in the appendix.\n\n\n—Minor point: There is no figure number for the figure at the top of page 9, which I believe should be Figure 3. Please fix this and the rest."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please see the questions in the previous part. \n\nIn addition: \n\n- Regarding the conditions in Theorem 8: are these practically achievable? For instance, can $n$, $d$, $T$, and $k$ be expected to yield satisfying approximation errors?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The problem studied in the paper is well defined in the abstract. \n- Understanding the approximation abilities of sparse attention models, of which kNN attention is a special case, is a significant research question. \n- Similarly, proposing alternatives to computationally costly gradient computations in Transformers could have a strong impact on the community.\n- The code is provided, which makes it possible to reproduce the experimental results."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The submission presents a theoretical framework for *k*-nearest neighbors (kNN) attention, an alternative to standard self-attention where each token attends to only the *k* \"closest\" tokens instead of all tokens. Additionally, the authors propose approximation methods for self-attention gradients using kNN search.\n\nMore precisely, the submission is organized as follows:\n\n- In part 1, the theoretical results are outlined in brief.\n- Part 2 focuses on kNN attention, introduced as an approximation algorithm. In Section 2.1, self-attention is reformulated as an expectation, and Theorem 4 presents the primary approximation results, comparing the outputs of true self-attention and kNN attention, both of dimension $n \\times d$.\n - Section 2.2 discusses efficient sampling from the softmax distribution (with each empirical distribution corresponding to a row of the attention matrix) via Gumbel sampling.\n - Section 2.3 introduces an alternative method for computing kNN attention outputs, designed to be more compatible with modern hardware.\n- Part 3 describes randomized algorithms to approximate self-attention gradients (with derivations in the appendix). These estimations leverage random walk simulations, and Theorem 10 provides a theoretical runtime analysis.\n- Part 4 shows experimental results on kNN attention’s efficiency in terms of space and time costs (Figure (a)) and shows approximation error as a function of k (Figure (b)). Figure 3 compares learning curves for standard gradients obtained using backpropagation with those of the proposed approximation in part 3. Figure 4 evaluates perplexity and approximation error on real-world data using nanoGPT."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I do not believe this paper to be ready to be published at ICLR for the following reasons:\n\nOverall clarity:\n\nThe paper is generally quite difficult to follow. Importantly, it is difficult to understand what is new in the paper. A clear statement of the contributions would be helpful. \n\nI am adding specific suggestions / questions for improvement below:\n\n- It should be clearly mentioned that Part 1 sketches the results that are later detailed in Parts 2 and 3. \n- Whenever a theorem is stated, a *detailed* proof should be attached to it, even if it is in the appendix. I could not find a proof for each theorem (for instance, Theorem 10). Please let me know if I am wrong. \n- Lemma 1 is hard to understand as is. What is a multiplicative estimate? If this is detailed in App A, then the remark in l. 77 should be added before the lemma. \n- Line 60 is unclear. For a reader encountering this sentence for the first time, the phrase \"how to extend the method to approximate the backward pass\" is confusing. Please clarify why this is important and what connection it has to kNN attention.\n- Mathematically, the paper lacks rigor. For instance, in line 104, is this an assumption on the differentiability of $\\phi$, or is it just notation? Also, where are the precise assumptions on the norms of $Q$, $K$, and $V$ stated? These should appear immediately following the theorem.\n- The source of the probability $1 - \\delta$ in all the theorems (e.g., line 92) is unclear. Please specify the source of randomness; is it due to sampling over the softmax-induced distribution?\n- In line 129, what does $T$ represent?\n- In line 120, the term $k_k$ in equation (3) is confusing.\n- Similarly, equation (5) is unclear because it takes the expectation with respect to $k$, yet there is an index $k$ in the sum. This could be clarified.\n\nTheoretical Contributions:\n\n- In general, I find the results challenging to interpret. How do these results compare to previous work? What are typical values?\n- Presenting attention as an expectation of the value matrix is not new. For instance, see *Attention: Marginal Probability is All You Need* (https://arxiv.org/abs/2304.04556). As written, it seems the submission presents this as a novel contribution, which is misleading.\n- Line 181: this is a proof sketch, not a full proof. I could not find the complete proof in the appendix.\n- The proof of Theorem 4 is also only a sketch. For instance, see the last two lines.\n\nExperiments:\n\n- There should be experiments specifically validating the theoretical bounds. As is, the experiments are rather qualitative.\n- I had to make some manual adjustments to get the provided code running. Including a `setup.py` would be helpful.\n- I ran your code for the gradient approximation in Figure 3. On my laptop, the approximated gradient takes approximately 100 times longer to compute than standard gradients. Did you observe similar behavior? In which settings do you expect it to offer speed advantages?\n- Similarly, I profiled the code from Appendix G against standard attention on a single GPU, using $B$, $H$, $N$, $D = 256$, $8$, $500$, $32$, and $k = 5$. I observed a runtime of 0.5831 seconds for kNN Attention versus 0.0061 seconds for Traditional Attention. If you do not observe similar results, could you provide code showing that your method leads to a speed advantage?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "NA"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1) Could you please clarify the theoretical novelty? For example:\n- Is Theorem 4 a direct consequence of Median-of-Means Boosting?\n- Does Theorem 8 (main result) rely heavily on Mussmann et al. (2017)?\n\n2) Where is the proof of Theorem 2? Is it a direct consequence of another specific theorem? \n3) In Theorem 4, what does $O(T)$ refer to? A discussion on the different parameters in the theorem is needed. \n4) In Theorem 28, how large can $\\rho$ be? Does it approach 1? \n5) Where is the proof of Theorem 29? Is it a direct consequence of (Alman & Song, 2024a; b)? \n\n6) For notation, please clarify Gumbel $(a, b)$ and Bin $(a, b)$. \n7) In my opinion, the statement and proof of Theorem 5 are vague. Are you trying to say there exists an index $\\hat{j}$ that gives the maximum, and can you provide a proof for this? \n8) In Lines 184–189, what is the Moment Generating Function of the Gumbel distribution used in the proof of Lemma 6? \n9) Where is the proof of Theorem 7? What do you mean by \"it is easy to derive from Algorithm 1\"? \n10) In Theorem 7, how strong is the assumption $k = \\sqrt{n}$? \n\n11) Algorithm 2 takes $\\epsilon$ and $\\delta$ as parameters, but I couldn't find them in the algorithm's steps. How are they used? Also, what is the range of these parameters? \n\n12) Theorem 8 is unclear. Why do we need $k$ and $\\ell$ to satisfy two inequalities and then set them equal from another inequality? Why not use only the equality case? Considering $\\epsilon$ and $\\delta$ between 0 and 1, for larger $\\ell$, we have:\n$\nk \\geq \\sqrt{\\frac{8n^2 \\varepsilon^{-2} \\log(4/\\delta)}{\\ell}}.\n$\nHowever, this assumption seems unrealistic as $k$ scales with $n$.\n\n13) Some parts of Algorithm 3 need clarification. In Line 2, \"lg\" should be \"log\". Why should $N$ be selected in that way? This requires more explanation. Also, what is $1^n$? \n\n14) In Line 246, which algorithm are you referring to? \n\n15) It is unclear how the total runtime in Line 224 was obtained. Is this result similar to Theorem 4 under comparable assumptions? \n\n16) What is Figure 3(a)? I couldn’t find it. \n17) What is the error definition in the discussion on \"Efficiency of kNN Attention\" (Lines 421–425)? \n18) What do you mean by convex and non-convex cases? In both settings, the attention approximation problem is non-convex due to softmax and the product of $Q$, $K$, and $V$. \n\n19) In the experiments section, you refer to “kNN” and “our algorithms” when comparing them with exact gradients or attention mechanisms. It is still unclear what \"kNN\" and \"our algorithms\" refer to in the context of this paper. \n\n20) What specific gains can be observed from the experiments in Sections 4.2 and 4.3? Should we expect time improvements in these sections, for example, in the NanoGPT case? How can we precisely measure the efficiency of the proposed methods, especially for self-attention gradient computation?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper presents a theoretical analysis of kNN attention (Roy et al., 2021), connecting it to Gumbel noise sampling and Markov Chain methods for efficient gradient approximation. \n\n- I think the paper is well-structured. \n\n- Additionally, the authors provide empirical experiments that demonstrate scalability and effectiveness in reducing computational and memory overhead."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the theoretical aspects of k-Nearest-Neighbor (kNN) attention (Roy et al., 2021) and addresses the challenge of quadratic complexity in self-attention. \n\nIt reformulates self-attention as expectations over softmax distributions, leveraging Lazy Gumbel Sampling (Mussmann et al., 2017) for efficient approximation. Then, novel sub-quadratic algorithms are introduced for approximating self-attention gradients using Markov Chain methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The authors need to clarify their contributions in the introduction and compare the theoretical novelty to Mussmann et al. (2017). The differences between this work and prior works are not clear until one reads the algorithms and theorems in detail. \n\n- Many theorems are presented without proofs or discussions (please see my questions below). This is problematic as even if the proofs are clear to the authors, they should provide proper references or detailed discussions on the theorems. \n\n- The literature contains many randomized attention methods, such as Nyströmformer and Skyformer. These should be discussed in the related works. Adding benchmarks from these methods would also be useful.\n\n- In the experiments section, some parts need clarification (please see my questions). For example, the authors used \"kNN\" as a legend, but it is unclear whether this refers to standard kNN or their modified version. Similarly, in several places, they mention \"our algorithm\" without specifying which algorithm they are referring to."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- In Sections 2.2 and 2.3, two approximation methods are proposed. Can their performance be compared directly?\n- In Section 4.1, the experiments are conducted with the matrices $Q,K$, and $V$ sampled from a uniform distribution. How would the performance change if $Q,K$, and $V$ were sampled from more biased distributions, such as those encountered in real-world data?\n- In Figure 3(a), why does the computational time for $k=n^{1/4}$ outperform $k=\\sqrt{n}$? Also, what does \"Brute Force\" refer to in this figure?\n- In Section 4.2, would it be possible to include a comparison of computation speeds?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper excellently ties the use of $k$-NN in self-attention to the context of lazy Gumbel sampling by framing self-attention as an expectation calculation.\n- Clear pseudocode is provided for each algorithm.\n- The authors have made their experimental code publicly available."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper provides a theoretical analysis of $k$-NN attention, which is often explored as a method to reduce the quadratic time complexity of self-attention in Transformers. \nSpecifically, it demonstrates that by combining $k$-NN with lazy Gumbel sampling, an unbiased estimate of self-attention can be obtained within an $\\epsilon$-multiplicative error with sub-quadratic time and space complexity. \nAdditionally, the paper proposes a method for approximating the gradient of self-attention within $\\epsilon$-additive error using random walk simulation, also achieving sub-quadratic time complexity."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- **Lack of originality**\n + At the beginning of Section 2.1, the authors claim that the first contribution of this paper is interpreting the softmax computation in self-attention as an expectation calculation. However, this interpretation itself is not a novel idea. For example, see [1].\n + Much of Section 2 simply applies results from [2] to the computation of self-attention. The authors should cite [2] in Theorems 5, 7, and 8, as well as in Lemma 6.\n- **Presentation issues**\n + The reference to [2] is listed as an arXiv preprint, but it was accepted at Uncertainty in Artificial Intelligence 2017.\n + In equation (3), the dot is used inconsistently between $q_i^\\top \\cdot k_k$ and $q_i^\\top k_s$.\n + The time complexity in Theorem 2 should be clarified as time complexity in expectation.\n + The citation format on line 97 of page 2 should be consistent with others.\n + \"BIN\" in Algorithm 1 is not defined.\n + The term \"kNN index\" lacks sufficient explanation.\n + The sentence \"if we assume that...\" on line 222 of page 5 was unclear to me; further clarification would be appreciated.\n + It would be helpful to clearly indicate which parts of Algorithm 2 constitute pre-processing.\n + Theorem 8 seems to rely on Theorem 3.5 from [2], which requires the assumption that $V_{sj}$ is bounded. If, like Theorem 2, the assumption $\\\\|V\\\\|_{\\infty} = O(\\log n)$ is used, this should be explicitly stated in the theorem.\n + The reference to Figure 3(a) and (b) is ambiguous; it seems to refer to the figure at the top of page 9. If so, the figure at the bottom of page 9 should be renumbered as Figure 4.\n + Numbered and unnumbered equations are mixed inconsistently.\n\n[1] Kratsios, Universal Regular Conditional Distributions. 2021. \n[2] Mussmann et al., Fast Amortized Inference and Learning in Log-linear Models with Randomly Perturbed Nearest Neighbor Search. 2017."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1. The Authors cast it in the paper as an open question, yet I wonder whether they have any intuition about how the optimal values of k can be derived in a more mathematically principled way (rather than just empirically), by leveraging the theoretical framework already developed in the paper. \n\n2. Did the Authors try to apply the sub-quadratic attention-gradient algorithm proposed in the paper to modalities other than text, e.g. in the context of ViTs?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "Great paper that sheds new light on sparse attention mechanisms leveraging the notion of the kNN graph. The probabilistic interpretation of the attention mechanism is actually well-known in the literature, yet it is very elegantly applied here to conduct a rigorous theoretical analysis of the method. The new algorithm for sub-quadratic computation of the attention gradients is yet another contribution of very practical impact. The idea to approximate the expectation coming from the probabilistic interpretation of the attention module via lazy Gumbel noise sampling is yet another beautiful insight that the paper provides."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper provides a theoretical analysis of k-nearest-neighbor (kNN) sparse attention techniques for efficient attention computation. In that setting, every token attends only to its k nearest neighbors. By leveraging the developed theoretical framework, the Authors propose a novel algorithm for sub-quadratic-time approximation of the self-attention gradients for efficient Transformer training (default computations involving attention modules in Transformers require quadratic time in the sequence length, a prohibitively large time complexity for longer input sequences). The conducted analysis leverages an interpretation of the attention mechanism as computing the average value vector with respect to the softmax distribution defined on all the keys. Experimental evaluation confirms the Authors' theoretical findings."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper might further benefit from placing the newly developed sparse attention mechanisms in the context of other efficient attention methods, e.g. those based on low-rank linear attention. It is also not clear how to choose the optimal value of k, since, as the Authors explain, the practically optimal value of k is often significantly smaller than \\sqrt{n}."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "This paper proposes a theoretical framework for kNN attention and develops novel algorithms for sub-quadratic attention gradient estimation in Transformers."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024knn,\ntitle={\\$k\\${NN} Attention Demystified: A Theoretical Exploration for Scalable Transformers},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=49v8meXjHS},\nnote={under review}\n}"
},
"abstract": {
"value": "Despite their power, Transformers \\citep{vaswani2017attention} face challenges with long sequences due to the quadratic complexity of self-attention. To address this limitation, methods like k-Nearest-Neighbor ($k$NN) attention have been introduced \\citep{roy2021efficient}, enabling each token to attend to only its $k$ closest tokens. While $k$NN attention has shown empirical success in making Transformers more efficient, its exact approximation guarantees have not been theoretically analyzed. In this work, we establish a theoretical framework for $k$NN attention, reformulating self-attention as expectations over softmax distributions and leveraging lazy Gumbel sampling \\citep{mussmann2017fast} with $k$NN indices for efficient approximation. Building on this framework, we also propose novel sub-quadratic algorithms that approximate self-attention gradients by leveraging efficient sampling techniques, such as Markov Chain-based estimation. Finally, we demonstrate the practical effectiveness of these algorithms through empirical experiments, showcasing their benefits in both training and inference."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"efficient transformers",
"self-attention mechanism",
"sublinear algorithms",
"sampling",
"k-nearest neighbors"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/f860eb1e4a5e76c47e807416be8293cfd2de38d3.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "$k$NN Attention Demystified: A Theoretical Exploration for Scalable Transformers"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
4A9IdSa1ul | Label Correlation Biases Direct Time Series Forecast | main | Active | Time series;Long-term Forecast | learning on time series and dynamical systems | 3;6;6;6 | 4;4;3;4 | 3;3;3;4 | 1;3;3;3 | 3;3;3;3 | 5.25 | 3.75 | 3.25 | 2.5 | 3 | -0.333333 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. In the experiments, how to determine the hyperparameters of FreDF and baselines?\n2. According to the ablation study, the frequency term seems to be almost all that's needed. Is it correct?\n3. During my attempt to reproduce the experiments, I encountered an error (`AttributeError: 'Exp_Short_Term_Forecast' object has no attribute 'seasonal_patterns'`) when running the short-term forecasting experiments. Is this error expected, or does it indicate a potential issue in the codebase?\n\nOverall I think this work is interesting and promising. I will consider raising my score if my questions could receive positive responses, especially my major concerns (W1, Q1, Q3)."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "1. The authors identify and address the bias introduced by label correlation in time-series modeling, which is a novel issue for me and holds substantial potential generality across different scenarios.\n2. The method is sound, straightforward and shows very promising results. The theoretical results are relatively persuasive, demonstrating the bias caused by label correlation, and subsequently FreDF's elimination of label correlation and thereby bias.\n3. The experiment is comprehensive. Extensive set of experiments showed that: the approach contributes to the state-of-the-art, different components of the loss contribute to performance, the approach is robust to hyperparameter values."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work introduces a new loss term for multi-step time-series forecasting that penalizes errors within a decorrelated representation space of the ground truth labels. In its current formulation, this decorrelated space is defined as the frequency representation of both labels and forecasts. Experimental results demonstrate that the proposed approach significantly enhances forecasting accuracy across various datasets and base models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. It seems that this work aims to penalize errors in a space that decorrelates labels. Experiments mainly achieve de-correlation via FFT or FFT2. It will be beneficial to explore the efficacy of other transformations beyond the Fourier transform?\n2. While the introduction discusses various forecasting models, it could benefit from a stronger focus on established forecasting paradigms (e.g., direct forecasting, iterative forecasting). The inclusion of forecast models (iTransformer, Linear) may detract from highlighting the contribution and role of FreDF that seems to be orthogonal to forecast models.\n3. The source code should be refined. The current implementation comprises numerous scripts, and it is somewhat unclear how each script relates to the experiments discussed in the manuscript. Besides, the current environment setup appears to depend on unexpected repositories like `torchmetrics` and `patool`. Including a `Dockerfile` or `conda` environment file would help a lot."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Computational complexity of frequency domain conversion: Will the frequency domain conversion mentioned in the paper significantly increase computational complexity? How efficient is the FreDF method when dealing with large-scale datasets?\n2. Generalization ability: How does the FreDF method generalize across different domains and datasets? Has it been tested and validated in more practical application scenarios?\n3. Applicability of frequency domain features: How effective is the FreDF method for data with unclear or difficult to extract frequency domain features? Are there any relevant experimental results or analysis?\n4. Model interpretability: Does the FreDF method have interpretability in the frequency domain? How do users understand and interpret these predicted results?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Solving the problem of autocorrelation in label sequences: The FreDF method proposed in this paper effectively bypasses the autocorrelation problem in label sequences by predicting in the frequency domain. This is a common but not fully addressed problem in existing direct prediction models.\n2. Compatible with multiple prediction models: The FreDF method is not only suitable for existing state-of-the-art methods such as iTransformer, but can also be compatible with multiple prediction models. This compatibility makes it widely applicable in different prediction tasks.\n3. Significant improvement in predictive performance: Experimental results show that the FreDF method is significantly superior to existing methods in multi-step prediction tasks. This indicates that the method has high accuracy and reliability in processing complex time series data.\n4. Innovative applications of frequency domain analysis: By introducing frequency domain analysis into time series prediction, the FreDF method provides a new perspective to address autocorrelation issues in time series. This innovative application not only improves predictive performance, but also provides new directions for future research."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Considering that in current time series prediction, DF only reuses general multi task learning methods without considering the dependency relationships between labels. Especially, even if there is no correlation between labels, the above loss function can be used to train a multitasking model. When training the model, DF attempts to minimize the error between the predicted label sequence and the true label sequence; This assumes that the label sequence is conditionally independent between different time steps, thereby ignoring the correlation between each time step within the label sequence. So the author uses fast Fourier transform in the frequency domain to transform the data from a temporal perspective to a frequency perspective, in order to suppress autocorrelation. And the theoretical derivation of using frequency domain conversion to suppress time step correlation is given. A loss function combining time domain and frequency domain is designed, and the effectiveness of the design method is demonstrated by experimental results."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Complexity of frequency domain conversion: Although frequency domain analysis can bypass the autocorrelation problem in label sequences, frequency domain conversion itself may increase computational complexity and time cost. This may become a bottleneck when dealing with large-scale datasets.\n2. Model generalization ability: Although FreDF performed well in experiments, its generalization ability in different fields and datasets still needs further validation. Especially in practical applications, the model may need to be adjusted and optimized for specific tasks.\n3. Dependence on frequency domain features: The FreDF method relies on the extraction and utilization of frequency domain features, which may limit its applicability in certain situations. For example, for data with unclear or difficult to extract frequency domain features, the performance of FreDF may not be as expected."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Are there any settings where defining the errors by transferring the predictions to the frequency domain is detrimental to the prediction accuracy?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Originality: The idea of augmenting an objective function with a loss in alternate bases that is more robust to value dependences in prediction sequences, in this case Fourier bases, is novel and interesting.\n\nSignificance: The extension applies to a subclass of direct forecasting models that predict components of future sequences independently, ignoring dependences that may exist among them. This makes the method potentially applicable to a variety of SOTA time series models. \n\nQuality: The authors attempt to analyze the problem of using simple additive error functions for predicting sequences and its fixes via errors in the FFT transformed space both theoretically and experimentally. Extensive experimentation across multiple datasets and tasks show the merits of the proposed approach compared to SOTA baselines.\n\nOther strengths: Exploration of properties and extension of the proposed framework and its benefits, such as, different prediction length, ablation models, possible new transformations and errors defined on these transformations. \n\nCode: Authors provide the code for reproducing the experiments."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a new Frequency-enhanced Direct Forecast method for time series predictions that aims to improve the predictions generated by direct forecast (DF) models by training the models using a combination of standard MSE errors and errors defined in the FFT transformed space. The new approach is motivated by the fact that the MSE errors for direct forecast (DF) models are biased and do not properly account for correlations in predicted sequences, while errors on the FFT on the predicted sequences may be more robust to such dependencies. The experiments on multiple datasets and multiple baseline DF models demonstrate the improved performance of the new method over baselines."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Limited analysis of a bias in Theorem 1: Theorem 1 considers only univariate sequences, cross-correlation terms are not accounted for in the Bias formula. The sequence has dimensionality D. \n\nNotational inconsistency and formula errors in the paper: \n- The use of 'L' is overloaded, it denotes both the length and inputs of the input sequence. \n- Equation 1 has a mistake. Y and Y_hat should be compared on the same indexes. \n- Definition 3.2, uses 'j' instead of “i” \n\nThe main results in the paper do not report performance over distinct random seeds. Given this is an experimental study, reporting the results on multiple distinct random seed could help to understand its sensitivity.\n\nThe short-term forecasting task results in the appendix report only qualitative time spans. It would be good to specify the results in terms of forecasting lengths, similar to the Table 1 for long-term forecasting."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See Weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The presentation of the paper is clear and the idea is straightforward."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes to measure the distance between time series prediction outputs and target signals in the frequency domain. \nAs claimed, this allows to capture label autocorrelation in time series prediction tasks. Speciafically, in practice, distances in both time and frequency domains are utilized for training in a combined manner."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The proposed approach lacks novelty. Measuring time series structure (e.g., autocorrelation) in the frequency domain has been well-explored. Several prior works have investigated capturing time series characteristics or enhancing robustness in loss functions, including:\n 1. Incorporating Fourier transformation in loss functions [1,2].\n 1. Employing DTW-based loss to keep shape information of time series [3,4].\n 1. Utilizing multiresolution trends during training [5,6].\n\nThese approaches are all relevant to this study. However, this paper does not adequately investigate, discuss, or compare these related methods.\n\n[1] Henning Lange, et al. From Fourier to Koopman: Spectral Methods for Long-term Time Series Prediction. JMLR 2021.\n\n[2] Xinyu Yuan and Yan Qiao. Diffusion-TS: Interpretable Diffusion for General Time Series Generation. In ICLR 2024\n\n[3] Vincent Le Guen and Nicolas Thome. Shape and Time Distortion Loss for Training Deep Time Series Forecasting Models. In NeurIPS 2019.\n\n[4] Vincent Le Guen and Nicolas Thome. Probabilistic time series forecasting with shape and temporal diversity. In NeurIPS 2020.\n\n[5] Shiyu Wang, et al. TimeMixer: Decomposable Multiscale Mixing for Time Series Forecasting. In ICLR 2024.\n\n[6] Amin Shabani, et al. Scaleformer: Iterative Multi-scale Refining Transformers for Time Series Forecasting. In ICLR 2023."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Learning to forecast in the frequency domain significantly enhances forecasting performance."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024label,\ntitle={Label Correlation Biases Direct Time Series Forecast},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=4A9IdSa1ul},\nnote={under review}\n}"
},
"abstract": {
"value": "Time series modeling is uniquely challenged by the presence of autocorrelation in both historical and label sequences. Current research predominantly focuses on handling autocorrelation within the historical sequence but often neglects its presence in the label sequence. Specifically, emerging forecast models mainly conform to the direct forecast (DF) paradigm, generating multi-step forecasts under the assumption of conditional independence within the label sequence. This assumption disregards the inherent autocorrelation in the label sequence, thereby limiting the performance of DF-based models. In response to this gap, we introduce the Frequency-enhanced Direct Forecast (FreDF), which bypasses the complexity of label autocorrelation by learning to forecast in the frequency domain. Our experiments demonstrate that FreDF substantially outperforms existing state-of-the-art methods and is compatible with a variety of forecast models. Code is available at https://anonymous.4open.science/r/FreDF-0FB1."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Time series",
"Long-term Forecast"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/f098078d0997928a7574fd1b547cf77e4a6a0830.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning on time series and dynamical systems"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Label Correlation Biases Direct Time Series Forecast"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
4AlNpszv66 | Identifying Feedforward and Feedback Controllable Subspaces of Neural Population Dynamics | main | Active | Control Theory;Systems Neuroscience;Dimensionality Reduction | applications to neuroscience & cognitive science | 3;3;5;8 | 2;4;3;4 | 3;1;2;4 | 3;2;2;4 | 1;1;3;3 | 4.75 | 3.25 | 2.5 | 2.75 | 2 | 0.478861 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "How would Figure 3b look if compared with reduced rank regression with the relevant number of dimensions? i.e., what is the best decodability with a limited number of dimensions? To follow up on this further, are the dimensions identified by reduced rank regression also somewhat orthogonal to those identified by PCA?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "The paper is well-motivated and well-reasoned. The findings are very interesting and highly relevant to the neuroscience and data analysis communities. The conceptual advances are very high."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper provides a very interesting way of thinking about relevant subspaces in recorded data. Firstly, the paper relates PCA to a specific form of feedforward control, and shows that PCA recovers the dimensions in the neural data that are most affected by an external control input. Secondly, the paper identifies a low-dimensional controller to control recorded dynamics, and defines the most controllable dimensions as the 'Feedback Controllability' components. This paper provides a conceptual advance to the field and further shows that non-normal dynamics that arise in constrained networks lead to orthogonal PCA and FCCA components, and that the feedback controllable components provide a good reconstruction of behavior."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The assumptions that the input is purely white noise is potentially problematic. Could the authors show, at least in simulation, that their main results hold with temporally filtered signals?\n\nWhile Theorem 1 is helpful for the paper, it may not be necessary to restate it in its entirety, or might be sufficient in the Appendix.\n\nSome key references may be missing, such as to the Henrici metric for non-normality."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I have no problem with the math and methods developed in this study, but I have a major conceptual question about feedback control.\n\nConceptually, feedback control means the controller sends feedback signals back to the system (neural circuits) based on the system's outputs. Conventionally, people think the behavioral output is a (feedforward) readout of the internal state of neural circuits. Then a gap is why behavioral outputs need to send feedback to neural circuits. Further, why do behavioral outputs need even to identify the feedback controllable subspaces? Does this operation complicate the information processing or bring some actual benefits to the brain?\nI can accept that the motor system and motor output (behavior) need feedback control, but I am still struggling to understand why the sensory cortex also needs that (S1 random in Table 1). I think some conceptual explanations about this are quite helpful, rather than just using this as a data analysis method to show its improved performance."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. It seems to be novel to apply the feedback controllable subspace to analyze neural data, while the theory of feedback control (subspace) theory has been well grounded.\n\n2. The writing of the paper is good and structure-wise. The math derivations are well laid out."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This study proposes a method to identify feedback controllable subspaces from neural population responses. It shows the feedback control can identify different subspaces with feedforward control, and can explain behavioral output better."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The data analysis seems a bit preliminary. I suggest the author could identify more intricate details of the identified subspaces from feedforward and feedback controls, and provide physical and behavioral interpretations of these subspaces if possible.\n\n2. The equation just below Eq. 6: should the last $x$ be $x_0$?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. The cost function for LQR (line 173) is ambiguous. If the integral is Riemann, there should be no $\\frac{1}{T}$ before the integration [Burl (1998), p.283]. If it is instead an Ito integral, please indicate this and use the notation in [Jonckheere & Silverman (1983)], as this refers not to the limit but to the “limit in the mean.”\n\n---\n2. The proof of Theorem 2 (Appendix A.3) is difficult to follow due to the contradictions and typographical errors noted in the weaknesses. Once the above weaknesses are resolved, please review the proof of Theorem 2 carefully to ensure its notations are consistent with the main text. Additionally, provide references for any equations that are sourced from other works but not derived here.\n\n---\n3. [Page 7, footnote] The normal matrix can be neither symmetric nor orthogonal. $A$ is a normal matrix iff $AA’ = A’A$. One example is\n$$\nA=\\left[\\begin{array}{ccc}\n1 &1 & 0 \\\\\n0 & 1 & 1 \\\\\n1 & 0 & 1\n\\end{array}\\right]\n\\Rightarrow\nAA' = A'A = \\left[\\begin{array}{ccc}\n2 &1 & 1 \\\\\n1 & 2 & 1 \\\\\n1 & 1 & 2\n\\end{array}\\right]\n$$\nClearly, $A$ is neither symmetric nor orthogonal. Please fix this terminology throughout the paper.\n\n---\n4. A complete set of figures for other data types (M1 random, S1 random, and M1 maze) should be included in the appendix. Each dataset should have a figure similar to Figure 3, in addition to the information in Table 1. This is crucial to demonstrate that FCCA is applicable across multiple datasets, especially as this is the only data analysis presented in the paper.\n\n---\n**Minor problems:**\n1. [Line 129] In equation (3), the dummy variable $dt$ should be after $e^{At}BB'e^{A't}$.\n2. [Line 141] The volume of reachable state space is proportional to $\\sqrt{\\det(\\Pi)}$, not $\\det(\\Pi)$, since the volume formula of an $n$-dimensional ellipsoid is $\\frac{\\pi^{n/2}}{\\Gamma(n/2+1)} \\prod_{k=1}^{n} r_k$ where $r_{1:n}$ are radius of the ellipsoid.\n3. 
[Line 479] This is a typo. It should be Fig. 3b. There is no Fig. 3c.\n4. [Line 325-326] “It can be shown that… to the truncated system…” Please provide references for this statement since it can be shown."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "- Authors develop FCCA, which computes the feedback control invariant, $\\text{Tr}(PQ)$, directly through the observed neural state $x(t) \\in \\mathbb{R}^N$."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This manuscript proposes an algorithm, Feedback Controllability Components Analysis (FCCA), to identify the low-dimensional subspace critical for feedback control. The authors claim that FCCA is a dimensionality reduction method that encodes controllability within the data. The authors apply FCCA to both simulations and data analyses."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Throughout the paper, almost every equation lacks either a reference or a derivation, making it difficult for readers to follow. For instance:\n* Section 2.1: The Lyapunov equation (3) has no reference. A reference can be [Burl (1998), p.72].\n* Section 2.2: The cost functions of the Kalman filter and LQR, the dual Riccati equations (5) and (6), and their associated parameter equations for $Q$ and $P$ lack references. References can be, for example, [Burl (1998), p.243 & p.283] and [Jonckheere & Silverman (1983)].\n* Section 2.3: The final FCCA equation (11) has no derivation details. Considering it's the main result of this paper, the derivation should be step-by-step. Additionally, the authors do not provide pseudocode for FCCA, making the algorithm challenging to follow.\n\n---\n2. The notation throughout the paper is inconsistent, with numerous typos. For example, in Section 2.2, the term $QC’CQ$ in equation (5) should appear only when the observation model is $y(t) = Cx(t) + v(t)$, where $v(t)$ is Gaussian white noise with covariance $I_d$. Sources such as [Burl (1998), p.243] and [Jonckheere & Silverman (1983)] include $v(t)$, leading their Riccati equations to include $QC’CQ$. However, [Ljung & Kailath (1976)] exclude $v(t)$, and thus their Riccati equation does not include $QC’CQ$. This creates a contradiction:\n* The $QC’CQ$ term seems necessary in deriving FCCA in equation (9) because it is dual to the LQR Riccati equation (6).\n* However, since the neural state $x(t)$ represents the true observation and $y(t) = Cx(t)$ is only a low-dimensional projection, there should be no noise term $v(t)$.\nIt is unclear how the authors intend to resolve this contradiction. This further underscores the importance of providing step-by-step derivations from the base model to ensure clarity and avoid such contradictions, making it easier for readers to follow. 
The authors should clarify their assumptions about the observation model and provide a step-by-step derivation showing how they arrive at equation (5) from their model equation (2), particularly explaining the presence or absence of the $QC'CQ$ term.\n---\n3. [Kashima (2016)] has shown that the feedforward cost matrix (i.e., the controllability Gramian matrix $\\Pi$) is equivalent to the data covariance. Additionally, Section 2.2 on feedback controllability closely follows the results in [Jonckheere & Silverman (1983)], including Theorem 1 and the similarity invariant matrix $QP$. Therefore, the only novel contribution of this paper is the derivation of FCCA in Section 2.3. However, this derivation contains serious typographical errors and lacks sufficient detail.\n\nThe first issue with FCCA is that it should be $x_b(t) = \\Pi x_a(t)$ rather than $x(t) = \\Pi x_a(t)$ in line 239, which would allow the authors to derive the equation in line 241 from equation (8). Although this may seem like a minor typo, it is critical because $x_a(t)$ is no longer connected to $x(t)$. Since $x(t)$ represents the observed neural state rather than a latent state, it is unclear how the covariance and cross-covariance of $x_a(t)$ could be estimated from $x(t)$ as required in equation (11), the FCCA formula. The authors should provide a detailed explanation of how they connect $x_a(t)$ to the observed neural state $x(t)$, and how this connection allows for the estimation of covariances $\\tilde{P}$ in equation (11).\n\n---\n4. The second issue with FCCA is that the Riccati equation (9) for $\\tilde{P}$ is not the same as the Riccati equation (6) for $P$. It is unclear why the authors equate $\\text{Tr}(QP)$ with $\\text{Tr}(Q\\tilde{P})$. A detailed derivation linking these two expressions is necessary. Additionally, it is not explained why equation (9) aligns with the Riccati equation associated with the modified LQR cost function (10). 
Since $C’C$ is replaced by $\\Pi^{-1}BB’\\Pi^{-1}$ in (10), the final LQR Riccati equation should exclude $C’C$, which contradicts equation (9). A detailed derivation of this step is also required. The authors should provide a step-by-step derivation showing how equation (9) can be transformed into equation (6), so they're equivalent, and how it relates to the modified LQR cost function in equation (10).\n\n---\n5. The third issue with FCCA is that equation (11) lacks a derivation. This should be derived step-by-step, as it represents the main result of the paper. Furthermore, some details in equation (11) appear questionable. For instance, following my previous point that $\\Pi x_a(t) = x_b(t)$ rather than $x(t)$, the inverse matrix $\\Sigma_T^{-1}(C)$ of $\\tilde{P}$ in (11) should be based on $x_b(t)$, not $x(t)$. Therefore, this formula appears to be incorrect. These issues need to be clarified through a detailed derivation. The authors should provide a complete, step-by-step derivation of equation (11), starting from the basic assumptions and clearly stating any approximations or simplifications made along the way.\n\n---\n6. Beyond the theoretical issues mentioned, a more fundamental question arises: Why not simply fit the parameters $A$ and $B$ of the linear dynamical model in equation (2) directly and solve the dual Riccati equations (5) and (6) iteratively? This would yield the Gramian matrix $\\Pi$, the Kalman error covariance $Q$, and the LQR cost matrix $P$. This approach is straightforward since $x(t)$ is the observed neural signal, not a hidden latent state, making it feasible to fit $A$ and $B$ through simple linear regression. Is there any practical advantage to using FCCA equation (11) compared to this more direct approach? A detailed comparison between the FCCA method and the more direct approach, including computational complexity, accuracy, and any other relevant factors will be helpful.\n\n---\n7. 
The simulations do not validate the correctness of FCCA. To achieve this, the authors should simulate a linear dynamical system (which appears to be the LDS model in Figure 2), compute $Q$ and $P$ using the dual Riccati equations (5) and (6), and find $C$ by minimizing $\\text{Tr}(QP)$. Then, they should compute $Q$ and $\\tilde{P}$ using equation (11) and confirm that these values match $Q$ and $P$ from equations (5) and (6). Finally, the optimal $C$ derived from (11) must align with the original $C$ that minimizes $\\text{Tr}(QP)$. This verification should be straightforward since all system parameters are precisely defined in the simulations. Please perform this validation and include the results in a new figure or table in the paper.\n\n---\n8. The simulations also fail to validate the correctness of Theorem 2. The authors should simulate the case where $B = I_N$ and $A = A’$, which would make the angle between the PCA and FCCA subspaces equal to zero. In other words, Figure 2a should include the case where $\\|AA'-A'A\\|_F=0$.\n\n\n---\nOverall, while the overall idea of finding the subspace with feedback controllability is nice, the derivation of FCCA is incomplete and unconvincing and novelty is unclear compared with prior work. Furthermore, the simulations do not properly validate the method's correctness and the data analyses are limited and unconvincing for demonstrating that the method works.\n\n\n**References**\nBurl, J. B. (1998). Linear optimal control: H (2) and H (Infinity) methods. Addison-Wesley Longman Publishing Co., Inc.."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "- Is the controllability Gramian $\\Pi$ equal to the covariance of the stationary distribution of $x(t)$ defined in eq. (2)? I think this is a more intuitive description that could be stated earlier.\n- Notation in section 2.2: on line 182, $Q$ is set equal to the (arg)-min over probability distributions. It appears this means that the optimal distribution is Gaussian with covariance matrix $Q$. Are these minimizations equivalent since the optimal distribution is a mean-zero Gaussian? It might help to state this.\n- Line 188: What is the meaning of \"$P$ encodes the regulation cost incurred for varying initial conditions\"?\n- Line 222: What is causal vs acausal Kalman filter? Does acausal mean the Kalman filter applied to the backward process $x_b(t)$?\n- Theorem 2: If $A$ is symmetric, then isn't $Re(\\lambda(A))=\\lambda(A)$? If so, I'd suggest stating this."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Identifying relevant low-dimensional projections of high-dimensional neural data is a highly relevant topic. This is evidenced by the numerous methods that have been proposed in the literature. This paper proposes a compelling methodology that is based on the controllability of the neural state space. They build on a well-developed theory from the control theory community that is not well-known in the neuroscience community."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a dimensionality reduction method called **Feedback Controllability Components Analysis (FCCA)**. The method identifies the subspace of a linear dynamical system that is maximizes a measure of feedback controllability, which is defined in terms of the optimal Kalman filter. Specifically, the *feedback controllability* is the trace of the product of two positive definite matrices: one matrix encoding the covariance of the estimation error and another matrix encoding how the optimal control cost depends on initial condition. They compare FCCA with PCA and show that the angle between the subspaces depends on the non-normality of the dynamics matrix. They also apply their method to neural recordings from rat hippocampus, macaque primary motor cortex and primary somatosensory cortex. In each case, they find that the FCCA projection is more predictive of the animal's behavior than the PCA projection."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- I found the paper to be dense and required significant effort to understand (at least for me, the main reason for my presentation score of 1 and my overall recommendation for rejection). One of the stated goals of this paper is to highlight important ideas from the control theory community that are relevant to the neuroscience community. I do not think this paper lays out the ideas with sufficient clarity for the theoretical neuroscience community (my confidence score is 2 because I don't think I fully understand the paper). While some of this is perhaps unavoidable due to the material, I think the presentation could be substantially improved with an intuitive figure explaining the concepts in this paper. It would be very helpful to have an illustrative example (e.g. in 2D) of the relationship between $A$, $B$ and the Gramian $\\Pi$ and how PCA and feedback controllability subspaces differ. A large portion of the computational neuroscience community is quite familiar with 2D linear dynamical systems and I think this paper misses an opportunity to connect the results to prior understanding in the community.\n- FCCA is only compared with PCA in section 4. There are a multitude of methods for extracting subspaces beyond PCA; e.g., slow feature analysis, GFPA, LFADS, etc. Such comparisons seem important if the goal is to encourage practitioners to use FCCA."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024identifying,\ntitle={Identifying Feedforward and Feedback Controllable Subspaces of Neural Population Dynamics},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=4AlNpszv66},\nnote={under review}\n}"
},
"abstract": {
"value": "There is overwhelming evidence that cognition, perception, and action rely on feedback control. However, if and how neural population dynamics are amenable to different control strategies is poorly understood, in large part because machine learning methods to directly assess controllability in neural population dynamics are lacking. To address this gap, we developed a novel dimensionality reduction method, Feedback Controllability Components Analysis (FCCA), that identifies subspaces of linear dynamical systems that are most feedback controllable based on a new measure of feedback controllability. We further show that PCA identifies subspaces of linear dynamical systems that maximize a measure of feedforward controllability. As such, FCCA and PCA are data-driven methods to identify subspaces of neural population data (approximated as linear dynamical systems) that are most feedback and feedforward controllable respectively, and are thus natural contrasts for hypothesis testing. We developed new theory that proves that non-normality of underlying dynamics determines the divergence between FCCA and PCA solutions, and confirmed this in numerical simulations. Applying FCCA to diverse neural population recordings, we find that feedback controllable dynamics are geometrically distinct from PCA subspaces and are better predictors of animal behavior. Our methods provide a novel approach towards analyzing neural population dynamics from a control theoretic perspective, and indicate that feedback controllable subspaces are important for behavior."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Control Theory",
"Systems Neuroscience",
"Dimensionality Reduction"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/a5ec03d7030e43d340ab90b0be465e04bf22f91a.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to neuroscience & cognitive science"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/b6d8cdfff4383d113ea8796baaefa0956781e596.zip"
},
"title": {
"value": "Identifying Feedforward and Feedback Controllable Subspaces of Neural Population Dynamics"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
4AuyYxt7A2 | Training-Free Message Passing for Learning on Hypergraphs | main | Active | Hypergraphs;Hypergraph Neural Networks;Graph Neural Networks | learning on graphs and other geometries & topologies | 3;5;6;10 | 4;3;4;4 | 2;2;3;4 | 1;3;2;4 | 3;3;2;4 | 6 | 3.75 | 2.75 | 2.5 | 3 | 0.226455 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper is well-structured and easy to follow.\n2. The summary of hypergraph neural networks is comprehensive, particularly with the insights provided in Table 1 and the related analysis."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes TF-HNN, a training-free hypergraph neural network that removes the need for computationally intensive message passing during training. By shifting hypergraph structural processing to the preprocessing stage, TF-HNN reduces computational complexity. The model achieves efficient, robust node feature generation without oversmoothing, utilizing as much structural information as traditional HNNs. Experiments show that TF-HNN outperforms state-of-the-art HNNs in accuracy and training speed, especially on large-scale benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Major weaknesses:\n1 Proposition 4.2 shows the similarity between the proposed method and APPNP [1], yet the paper does not cite APPNP. Specifically, Eq (6) and Eq (14) appear to apply APPNP after performing clique expansion on the hypergraph. This connection should be discussed more thoroughly, referencing APPNP's Eq (7) and Eq (8) to clarify the relationship and implications of this similarity in the context of hypergraph learning. While the experimental results demonstrate superiority, this oversight is a significant limitation on the paper's originality.\n\n2 Equations (4a) to (4d) result from removing learnable parameters from several baselines, corresponding to the different scenarios in the authors' proposed framework. While these modifications reasonably reduce training time, there is insufficient ablation study to explain the performance improvements. The authors' comparison of the impact from the weighted $S$ of TF-HNN in node classification is commendable. However, they should also present the performance of these modified baselines to clearly demonstrate the impact of removing the learnable parameters.\n\nMinor weakness:\n\nChien et al.'s work [2] provides the challenging YELP dataset, where many baseline methods yield unsatisfactory performance. Including results on this dataset would enhance the paper's quality and offer a more comprehensive evaluation of the proposed method.\n[1] You are allset: A multiset function framework for hypergraph neural networks. Chien et al. ICLR 2022 \n\n[2] Predict then Propagate: Graph Neural Networks meet Personalized PageRank\nGasteiger et al. ICLR 2019"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1. An assumption is made about the structure of hypergraph i.e., absence of isolated nodes or empty hyperedges, for the theoretical results. What happens if isolated nodes or empty hyperedges are present? I am not able to see why this assumption is required, and what breaks if it is violated?\n2. It is commendable that the proposed TF-HNN performs significantly better than the baselines, but it is also a bit strange to see the baselines performing so poor, particularly on trivago. I understand the boost in training time, but not able to fully understand why there is a 10% improvement, it seems to me that the learning ability of any SOTA HNN should be similar to TF-HNN. I may have missed something, but curios to hear what the authors have to say on this."
},
"rating": {
"value": 10
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "1. The paper makes a significant contribution by addressing the issue of high computational complexity of Hypergraph learning algorithms. \n2. The proposed solution, TF-HNN is novel and elegant, which decouples the processing of structural information from the model training stage. \n3. Authors provide a strong theoretical foundation for TF-HNN, the unified framework presented in the paper links all the popular HNN approaches, which shows that TF-HNN is designed by keeping many existing methodologies in mind, and hence provides a comprehensive mechanism for efficient training.\n4. Extensive experiments on diverse real-world datasets for node classification and hyperedge prediction tasks demonstrate the competitive performance of TF-HNN against state-of-the-art HNN baselines while requiring significantly less training time.\n5. The paper is well-written, with a clear motivation, rigorous theoretical analysis, and thorough empirical evaluation supporting the proposed method's effectiveness and efficiency."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a novel approach called TF-HNN (Training-Free Hypergraph Neural Network) to address the high computational complexity during training in existing hypergraph neural networks (HNNs). The key innovation is a training-free message passing module (TF-MP-Module) that decouples the processing of hypergraph structural information from the model learning stage. The authors first derive a theoretical framework that provides a unified view of existing HNN approaches, identifying the feature aggregation function as the core component processing hypergraph structure. Based on this insight, they remove the learnable parameters and non-linear activations from the feature aggregation functions of four state-of-the-art HNNs to make them training-free. Further, they consolidate the feature aggregation across layers into a single propagation step, resulting in the proposed TF-MP-Module. Extensive experiments on seven real-world datasets for the tasks of node classification and hyper-link prediction demonstrate the competitive performance of TF-HNN, with very less training time. TF-HNN is the first approach to shift the processing of structure to pre-processing stage, which significantly enhances training efficiency."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I do not see any weak points in this paper. This is a very well written paper, with significant contributions. Please refer to the questions sections for the questions I have."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Isn’t too much time being spent on hyperparameter search due to the extensive hyperparameter search range?\n\nIf the hyperparameters were indeed selected based on the validation set, could you demonstrate this by providing heatmaps of the hyperparameters across the various validation and test sets used in the experiments?\n\n**minor comments**\n- line 209: shonw -> shown\n- line 144: Ortega et al. (2018) -> (Ortega et al. (2018))"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The design of the proposed method is very interesting.\n2. The proposed method is both highly efficient and effective.\n3. The paper is clearly written and well-organized, making the research easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces an efficient hypergraph learning scheme that performs message passing prior to learning neural network parameters. The proposed framework, called TF-HNN, includes a TF-MP-Module in which training-free message passing is performed. Using the updated features from the TF-MP-Module, TF-HNN learns an MLP model without a heavy computational burden. Despite its high learning efficiency, TF-HNN demonstrates either superior or competitive performance in hypergraph learning tasks. Theoretical analysis supports the design of the proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "It appears that there is an issue regarding the hyperparameters. The combinations of hyperparameters used in the experiments shown in Table 13 and Table 14 are quite diverse. For example, the value of alpha ranges from 0.05, 0.15, 0.3, 0.6, 0.65, to 0.7. The learning rate also varies, with values like 0.0006, 0.0001, 0.005, 0.001, and 0.0002. What method was used for hyperparameter search? Additionally, upon reviewing the attached anonymous GitHub link, it appears that the optimal hyperparameters were selected based on performance on the test set rather than the validation set. Were the hyperparameters selected in a fair manner? An analysis of hyperparameter sensitivity should be added."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Refer to the Weakness section."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- S1. Given that real-world hypergraphs are often large, scalable HNNs are necessary in practice. \n- S2. Although the model design is quite trivial, the theoretical motivation (i.e., building a general unified framework and simplifying it) of the model design is systematic and interesting.\n- S3. Experiments are comprehensive."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose a scalable hypergraph neural network (HNN) that reduces the computational cost associated with message passing at each training epoch by decoupling the message passing step.\n\nTo achieve this, they (1) establish a general framework that explains popular HNN models, and (2) simplify this framework by removing learnable components and non-linearity, resulting in a single linear operator.\n\nThey demonstrate the effectiveness of their approach on both small- and large-scale hypergraphs, showing improvements over several existing HNN models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "***Major comments.***\n\n- ***W1. Regarding Proposition 3.2.*** My understanding is that the key idea is the existence of a clique expansion that satisfies the property outlined in Proposition 3.2. \nIn practice, however, the authors use a fixed clique expansion as described in Equation 5. \nTo what extent does this chosen clique expansion align with the one referenced in Proposition 3.2? \nWhile the exact formulations may differ, it would be helpful to know if the high-level characteristics of these clique-expanded graphs are similar. \nThis is essential, in my view, as it clarifies whether the theoretical analysis effectively supports the proposed method.\n\n- ***W2. Regarding Proposition 4.1.*** Could you clarify what is meant by the \"entropy of information\"? Does this refer to mutual information between features and node labels? Further elaboration on this point would help in understanding the key takeaway from this proposition.\n\n- ***W3. Regarding the Initial Message Passing Operator Computation Complexity*** Although the message passing operator incurs a one-time computation cost, the time required for this process should be reported. If this initial computation is substantial and exceeds the typical training time of existing HNNs, it could limit the practical efficiency of the proposed method.\n\n***Minor comments.***\n- In Lines 52-53, the text mentions that $n^{k}$ memory is required. While this is accurate for dense tensor formats, typical tensor representations are stored in a sparse format, and sparse operations are well-supported in modern deep-learning libraries. Thus, storing a dense incidence tensor is generally not necessary in practice. It may be helpful to revise this part to reflect real-world scenarios.\n- In Lines 79-80, the period \".\" is missing.\n- Please provide clarification on the source of the datasets used.\n\nPlease let me know if I have misunderstood any parts. Thank you."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose a novel model called TF-HNN, which addresses the challenge of efficient training of hypergraph neural networks while retaining effectiveness."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024trainingfree,\ntitle={Training-Free Message Passing for Learning on Hypergraphs},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=4AuyYxt7A2},\nnote={under review}\n}"
},
"abstract": {
"value": "Hypergraphs are crucial for modelling higher-order interactions in real-world data. Hypergraph neural networks (HNNs) effectively utilise these structures by message passing to generate informative node features for various downstream tasks like node classification. However, the message passing module in existing HNNs typically requires a computationally intensive training process, which limits their practical use. To tackle this challenge, we propose an alternative approach by decoupling the usage of hypergraph structural information from the model learning stage. This leads to a novel training-free message passing module, named TF-MP-Module, which can be precomputed in the data preprocessing stage, thereby reducing the computational burden. We refer to the hypergraph neural network equipped with our TF-MP-Module as TF-HNN. We theoretically support the efficiency and effectiveness of TF-HNN by showing that: 1) It is more training-efficient compared to existing HNNs; 2) It utilises as much information as existing HNNs for node feature generation; and 3) It is robust against the oversmoothing issue while using long-range interactions. Experiments based on seven real-world hypergraph benchmarks in node classification and hyperlink prediction show that, compared to state-of-the-art HNNs, TF-HNN exhibits both competitive performance and superior training efficiency. Specifically, on the large-scale benchmark, Trivago, TF-HNN outperforms the node classification accuracy of the best baseline by 10% with just 1% of the training time of that baseline."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Hypergraphs",
"Hypergraph Neural Networks",
"Graph Neural Networks"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/bcc8195b0be5005d6b3dea992adafd83bb3755fc.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning on graphs and other geometries & topologies"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Training-Free Message Passing for Learning on Hypergraphs"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
4BFzTrIjPN | CONGO: Compressive Online Gradient Optimization | main | Active | online convex optimization;compressive sensing;regret analysis | optimization | 6;6;6;6 | 2;3;2;3 | 3;3;3;3 | 3;3;3;3 | 2;3;3;3 | 6 | 2.5 | 3 | 3 | 2.75 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "The regrets in Theorem 2 and 3 holds in expectation, but if understanding is correct, they also hold with high probability?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper is overall well written, and presents the setup, results, and proof clearly. The algorithms proposed appear efficient in terms of the regret and the sampling complexity."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper studies zeroth-order online convex optimization, where the gradient of the objective function is assumed to be sparse. The proposed algorithms, CONGO, combine the (projected) gradient descent algorithm for online convex optimization, with a gradient estimation procedure using compressive sensing technique. The regret is proven to be O(\\sqrt(T)) and does not depend on the dimension of the problem, and the per-iteration sampling complexity scales with the sparsity level of the gradient. Experiments confirm the effectiveness of the algorithms."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "-- One might argue that the results are not too surprising: the regret follows from the regret of online gradient descent, while the sampling complexity follows from the compressive sensing results. \n\n-- In CONGO-B, line 827 – 829, gradient recovery requires solving an LP, which can be computationally inefficient, especially in high-dimensional setting. In addition, compressive sensing usually requires knowledge of the sparsity level before setting the number of samples. If such knowledge is lacking or inaccurate, compressive sensing might fail completely [1]. \n\n[1] Amelunxen, Dennis et al. “Living on the edge: phase transitions in convex programs with random data.” Information and Inference: A Journal of the IMA 3 (2013): 224-294."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Could the authors clarify the fundamental difference between the proposed method and AISTATS'18 paper mentioned above? Please highlight the advantages of the proposed method.\n\nIn the numerical experiments, did you observe any phase transition on the number of measurements to the convergence speed of the proposed algorithms?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "This is a well-written paper in general, the idea of introducing compressed sensing for estimating the gradients is very inspiring. The numerical performance of the proposed scheme is excellent. The presentation of the paper is very clear and easy to read."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposed a framework for online zeroth-order optimization leveraging the techniques and insights from compressed sensing. The authors provide several schemes for efficiently sampling the objective functions values and estimate the gradients, alongside with theoretical convergence proofs revealing the fast convergence rates. The numerical results demonstrate this approach's superior performance over state-of-the-art baselines."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The novelty of the proposed scheme may be potentially limited (rebuttal against this point is welcomed as the reviewer is not familiar with zeroth-order optimization literature). The reviewer has seen similar approach been proposed in Wang et al, \"Stochastic zeroth-order optimization in high dimensions\" AISTATS'18, where they utilized a very similar idea but used LASSO (L_1) instead of CoSAMP (L_0). The numerical study did not considered this AISTATS'18 paper as a baseline, although being cited in the reference."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. The theoretical analysis assumes exact gradient sparsity, which may not hold in all practical scenarios. Could the authors discuss potential extensions of their theoretical analysis to approximately sparse gradients, or provide insights on how performance changes as the level of sparsity decreases? This could help clarify the framework's robustness and practical applicability.\n\n2. While the regret bounds are dimension-independent, sample complexity grows logarithmically with dimension. In very high-dimensional settings, how does this sample complexity impact practical performance?\n\n3. The framework is compared to standard SPSA-based methods, but how does it compare to other advanced sparse optimization or regularized gradient estimation techniques?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The use of compressive sensing within an OCO framework is a fresh and well-motivated idea. By focusing on sparse gradients, the authors address both sample efficiency and dimensionality reduction, which are critical in high-dimensional settings. \n\n2. The authors provide rigorous theoretical analysis, establishing regret bounds that demonstrate sublinear scaling with respect to the problem horizon, independent of the problem dimension.\n\n3. The three algorithmic variants, CONGO-B, CONGO-Z, and CONGO-E, offer a nice balance of performance and complexity. For instance, CONGO-E uses Gaussian matrices and CoSaMP for enhanced performance, while CONGO-Z is more sample-efficient."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a framework designed for zeroth-order online convex optimization (OCO) in environments where gradients are often sparse. The core idea is to combine compressive sensing techniques with online optimization to take advantage of this sparsity, which allows for efficient high-dimensional optimization with fewer samples. \n\nThe authors propose three variations—CONGO-B, CONGO-Z, and CONGO-E—that utilize different compressive sensing approaches and gradient estimation methods. They back up their approach with theoretical guarantees, showing sublinear regret bounds, and validate the framework through experiments on both synthetic and real-world tasks (like microservice autoscaling)."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The theoretical analysis assumes exact gradient sparsity, which may not be realistic for all real-world problems.\n\n2. While CONGO outperforms standard gradient descent with SPSA, it’s mostly compared against methods that don’t leverage sparsity. A more comprehensive comparison with advanced sparse optimization techniques or regularized gradient estimators would help here."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. How sensitive is the proposed algorithm to sparsity? What would happen if the sparsity measure $s$ is large, will the performance still be comparable to other existing methods?\n\n2. Could adaptive strategies be used for the sparsity?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The idea of using compressing sensing to estimate gradients more efficiently is novel. From the theory provided, the total cost of function value evaluation is reduced which makes it suitable for problems of large dimension.\n\n2. Based on the specific setting and the measurements used, three different variants are proposed, which makes the framework flexible.\n\n3. The paper presents theory on the regret bound and validate the theory with experiments."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduced the compreesive online gradient optimization framework (CONGO) for solving sparse online convex optimization problems based on motivating examples in real world application. Three variants using simultaneous perturbation and compressive sensing are proposed based on different inspiration. The proposed algorithms were validated both theoretically and experimentally."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While the paper is generally interesting and the idea is novel, the reviewer wants to point out that the paper contains *formatting errors*, specifically, the heading \"Under review as a conference paper at ICLR 2025\" is missing at each page.\n\n1. It seems to the reviewer that the proposed method needs careful tuning of the parameters, such as the step size and the sparsity, which may require additional information which makes the proposed algorithm less practical. \n\n2. CONGO-B seems to be less efficient and more unstable compare to the other variants, this is not explained in theory.\n\n3. The proposed assumptions are a little restrictive in reality."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "A compressive sensing based approach to online convex optimization and its application to the optimization of queueing networks"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024congo,\ntitle={{CONGO}: Compressive Online Gradient Optimization},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=4BFzTrIjPN},\nnote={under review}\n}"
},
"abstract": {
"value": "We address the challenge of zeroth-order online convex optimization where the objective function's gradient exhibits sparsity, indicating that only a small number of dimensions possess non-zero gradients. Our aim is to leverage this sparsity to obtain useful estimates of the objective function's gradient even when the only information available is a limited number of function samples. Our motivation stems from the optimization of large-scale queueing networks that process time-sensitive jobs. Here, a job must be processed by potentially many queues in sequence to produce an output, and the service time at any queue is a function of the resources allocated to that queue. Since resources are costly, the end-to-end latency for jobs must be balanced with the overall cost of the resources used. While the number of queues is substantial, the latency function primarily reacts to resource changes in only a few, rendering the gradient sparse. We tackle this problem by introducing the Compressive Online Gradient Optimization framework which allows compressive sensing methods previously applied to stochastic optimization to achieve regret bounds with an optimal dependence on the time horizon without the full problem dimension appearing in the bound. For specific algorithms, we reduce the samples required per gradient estimate to scale with the gradient's sparsity factor rather than its full dimensionality. Numerical simulations and real-world microservices benchmarks demonstrate CONGO's superiority over gradient descent approaches that do not account for sparsity."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"online convex optimization",
"compressive sensing",
"regret analysis"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/c4b9c0c8788c84bb46d83b64baf3b52f9afccc08.pdf"
},
"presentation": null,
"primary_area": {
"value": "optimization"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/8dc356dc4be7d569d39c292986c48bc26a902d97.zip"
},
"title": {
"value": "CONGO: Compressive Online Gradient Optimization"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
4BYzyGKIcb | Sharpness-Aware Geometric Defense for Robust Out-Of-Distribution Detection | main | Active | Robust out-of-distribution detection;Adversarial training;Sharpness-aware minimization | alignment, fairness, safety, privacy, and societal considerations | 3;3;6 | 4;5;4 | 2;2;2 | 2;1;2 | 1;2;3 | 4 | 4.333333 | 2 | 1.666667 | 2 | -0.5 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N / A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See weaknesses"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper presents a novel angle by examining OOD detection under potential adversarial attacks - a scenario that has received limited attention.\n- The experimental evaluation is comprehensive and thorough."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper argues that adversarial examples should be classified as in-distribution samples rather than outliers. The authors imagine out-of-distribution (OOD) detection scenarios where input data may be subject to adversarial attacks. They demonstrate that their proposed method maintains robust OOD detection performance even when the data contains adversarial perturbations. The authors achieve strong experimental results by incorporating several established techniques in their approach."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "## Should adversarial examples be classified as in-distribution samples rather than outliers?\nIt is clear that by adding adversarial perturbation, the distribution shifted, why it still should be in-distribution?\n\n## About the scenario\nWhat are some real-world applications where out-of-distribution detection must handle potentially adversarially attacked images? \n\n## About the Contribution\nThe contribution of this work should be carefully justified. Most of the subsection in section 3 are existing methods. In the introduction section, the authors claim that the smoother regularizer introduced in 3.3 is the key contribution, but I do not agree that this intuition based smoothing regularizer is enough to let this paper be accept.\n\nGiven the limited practical relevance of the scenario and modest contributions, I recommend rejection."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1.\tIs the hyperspherical geometry learning a type of adverbial defense method? I did not see the related description in the section “ADVERSARIAL DEFENSES”.\n2.\tWhy not use other attack methods to perform adversarial training?\n3.\tHow to obtain the class prototype $\\mu_k$ ?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1.\tThe authors investigate various adversarial attacks on different OOD detection approaches. Extensive experiments demonstrate the effectiveness of the proposed method.\n2.\tThey introduce Jitter-based perturbation in adversarial training to extend the defense ability against unseen attacks.\n3.\tThey employ Multi-Geometry Projection (MGP) and Riemannian Sharpness-aware Minimization (RSAM) for the OOD detection."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose a sharpness-aware method for improving OOD detection in adversarial training. Specifically, a multi-geometry projection network is trained to extract the hypersphere and hyperbolic features using jitter-based adversarial samples. Moreover, the network is optimized by sharpness-aware loss minimization using RSAM. Extensive experiments demonstrate the effectiveness of the proposed method. However, I have some concerns about this paper. My detailed comments are as follows."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tMy first concern is the reasonability of the research setting. The paper presents a method to classify adversarial examples as in-distribution (ID) samples in the context of out-of-distribution (OOD) detection. However, I find the rationale for this setting questionable for two main reasons:\n* Adversarial examples, by design, deviate significantly from the natural data distribution, even if they remain close in image space. Treating them as OOD samples aligns with standard OOD detection objectives, as these samples no longer represent the semantic consistency of ID data.\n* Detecting adversarial examples as OOD is practically advantageous, as it helps prevent their influence on model predictions. For most applications, identifying adversarial samples as OOD is a more effective way to mitigate potential risks, while treating them as ID can increase vulnerability to attacks.\n\n2.\tThe novelty of the methodology is limited. The proposed method appears to be a fusion of the MMEL approach [1] and the RSAM technique [4], denoted as MPG and RSAM, respectively, within the present paper.\n3.\tThe motivations of the introduction for the three components in the approach are not clear. Why do you use MGP, RSAM and Jitter-based perturbation?\n4.\tThe content of Figure 2 appears to have been adapted from Figure 1 in the referenced paper [1].\n5.\tThe significance of the sharp loss landscape seems to be self-evident, as it has been extensively explored in the existing literature [2, 3]. Regrettably, I fail to discern any novel contribution from the current paper in this regard.\n\n[1] Learning Multi-Manifold Embedding for Out-Of-Distribution Detection\n\n[2] Sharpness-Aware Minimization for Efficiently Improving Generalization\n\n[3] Detecting Adversarial Samples through Sharpness of Loss Landscape\n\n[4] Riemannian SAM: Sharpness-Aware Minimization on Riemannian Manifolds"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Could the authors analyze the computational complexity of computing the OOD score, and how does it scale with the size of the dataset? Additionally, can the authors provide insights into how the number of in-distribution (ID) training samples affects the performance of the method?\n\n2. In practical applications, selecting the threshold $\\lambda$ for the OOD score can be challenging. Could the authors elaborate on the procedure for choosing an optimal threshold, especially under varying dataset conditions and deployment scenarios?\n\n3. Could the authors design adaptive attacks that directly target the proposed OOD scoring mechanism and evaluate the proposed defense against such adaptive attacks?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. It introduces a novel sharpness-aware method for improving OOD detection in adversarial training. The proposed method investigates the combination of Riemannian geometries under adversarial conditions. This expansion of geometry space sharpens the proposed defense against adversarial attacks and avoids reliance on large OOD datasets for auxiliary training.\n\n2. The proposed SaGD sets a new SoTA for OOD detection, excelling in $FPR_{95}$ and AUC metrics, both with or without attacks.\n\n3. It performs ablation experiments to analyze the relations between the minimization of a sharp loss landscape and OOD detection performance under various adversarial conditions."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a robust method for out-of-distribution (OOD) detection that effectively separates adversarial in-distribution (ID) samples from OOD ones. It introduces the Sharpness-aware Geometric Defense (SaGD) framework, which smooths the irregular adversarial loss landscape within the projected latent space. By improving the convergence of geometric embeddings, the framework enhances the characterization of ID data, strengthening OOD detection in the presence of adversarial attacks. Additionally, the use of jitter-based perturbations in adversarial training expands the defense against unseen threats. Experimental results demonstrate that the SaGD framework achieves significant improvements in false positive rate (FPR) and area under the curve (AUC) compared to state-of-the-art methods, particularly in distinguishing CIFAR-100 from six other OOD datasets under various attack scenarios."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. It should provide a detailed analysis of the computational complexity involved in computing the OOD score. Additionally, it is important to examine how the number of in-distribution (ID) training samples affects the performance of the OOD score, as this can influence the scalability and generalizability of the approach.\n\n2. Choosing an appropriate threshold $\\lambda$ for the OOD score can be challenging in real-world applications. The paper should include a clear, practical procedure for determining this threshold to ensure consistent performance across diverse datasets and scenarios.\n\n3. To thoroughly validate the robustness of the proposed defense, it should incorporate adaptive attacks specifically designed to exploit the OOD scoring mechanism. Following the recommendations in [1], it should evaluate the effectiveness of the defense against these adaptive attacks to demonstrate its resilience under targeted adversarial conditions.\n\n[1] Tramer, Florian, et al. \"On adaptive attacks to adversarial example defenses.\" Advances in neural information processing systems 33 (2020): 1633-1645."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024sharpnessaware,\ntitle={Sharpness-Aware Geometric Defense for Robust Out-Of-Distribution Detection},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=4BYzyGKIcb},\nnote={under review}\n}"
},
"abstract": {
"value": "Out-of-distribution (OOD) detection ensures safe and reliable model deployment. Contemporary OOD algorithms using geometry projection can detect OOD or adversarial samples from clean in-distribution (ID) samples. However, this setting regards adversarial ID samples as OOD, leading to incorrect OOD predictions. Existing efforts on OOD detection with ID and OOD data under attacks are minimal. In this paper, we develop a robust OOD detection method that distinguishes adversarial ID samples from OOD ones. The sharp loss landscape created by adversarial training hinders model convergence, impacting the latent embedding quality for OOD score calculation. Therefore, we introduce a **Sharpness-aware Geometric Defense (SaGD)** framework to smooth out the rugged adversarial loss landscape in the projected latent geometry. Enhanced geometric embedding convergence enables accurate ID data characterization, benefiting OOD detection against adversarial attacks. We use Jitter-based perturbation in adversarial training to extend the defense ability against unseen attacks. Our SaGD framework significantly improves FPR and AUC over the state-of-the-art defense approaches in differentiating CIFAR-100 from six other OOD datasets under various attacks. We further examine the effects of perturbations at various adversarial training levels, revealing the relationship between the sharp loss landscape and adversarial OOD detection. The implementation code will be released upon paper acceptance."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Robust out-of-distribution detection",
"Adversarial training",
"Sharpness-aware minimization"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/416dcca21b78d159da85c322b70436c1de8cec45.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/43d1925dd16cd0d04272fed80b84aa2b4b2caa43.zip"
},
"title": {
"value": "Sharpness-Aware Geometric Defense for Robust Out-Of-Distribution Detection"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
4CFVPCYfJ9 | Does Vector Quantization Fail in Spatio-Temporal Forecasting? Exploring a Differentiable Sparse Soft-Vector Quantization Approach | main | Active | spatio-temporal forecasting;vector quantilization;sparse regression;differentiable;soft | unsupervised, self-supervised, semi-supervised, and supervised representation learning | 5;5;6;6 | 4;4;5;3 | 2;2;3;3 | 2;2;4;3 | 3;1;4;3 | 5.5 | 4 | 2.5 | 2.75 | 2.75 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. What is the motivation of using vector quantization into spatiotemporal prediction?\n2. What is the significance of theoretical analysis in Chapter 4? Is this theoretical analysis related to video prediction?\n3. What are the ''improvement'' in Table 1.2.3. refer to, SimVP?\n4. Can the method proposed in the paper be compared with diffusion based models? For example, ExtDM: Distribution Extrapolarization Diffusion Model for Video Prediction, CVPR2024. What are the differences between these two methods, e.g., their application scenarios or efficiency?\n5. Why are there different types of comparison results between WeatherBench-S and WeatherBench-M in Tab. 1 (Total Cloud Cover in WeatherBench-S and Wind UV in WeatherBench-M)? Why not compare the same subjects? What are the differences in SVQ performance across different physical quantities and data scales?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper shows originality through the development of SVQ, a novel combination of sparse regression and soft vector quantization for spatio-temporal forecasting with theoretical analysis. Empirical validation is extensive, with experiments on multiple real - world datasets and comparisons to existing methods, achieving state-of-the-art results and validating the method's effectiveness and quality. It has potential applications in various domains and can inspire future research, opening new avenues for exploration and providing insights for model development."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper identifies the limited performance of traditional vector quantization (VQ) in spatiotemporal forecasting due to non-differentiability and limited representation power. It proposes Differentiable Sparse Soft - Vector Quantization (SVQ), which approximates sparse regression with a two-layer MLP for differentiability and uses Soft-VQ with sparse regression to capture patterns and filter noise. Empirical results on multiple datasets show SVQ achieves state-of-the-art performance, is versatile as a plug-in, and effectively balances detail and noise reduction. Ablation studies and additional analyses confirm its key components’ importance and its robustness."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The novelty of the approach is limited, and the performance improvement appears marginal. Furthermore, there is no discussion on the additional computational overhead that these slight performance enhancements (such as those observed in wind speed and humidity in Table 1, and in the KTH dataset in Table 2, as well as in Table 18) will incur. A more detailed analysis of the computational overhead introduced by SVQ, compared to baseline methods, is needed, especially for cases where the performance gains are smaller.\n\n2. The motivation is unclear. It is not surprising to use huge codebook and sparse representation to improve the effect, because huge codebook itself brings a lot of extra parameter redundancy. So why use vector quantization? Because in other fields (e.g., video compression, video generation), VQ is to compress redundant information, not to add redundant information. You could further explain their rationale for using vector quantization in this context, given its typical use for compression in other fields.\n\n3. The theoretical analysis provided seems unrelated to the content of the article. Furthermore, the article fails to discuss the relationship between the information or features extracted after compression using SVQ and the original spatiotemporal data. Consequently, there is a notable lack of corresponding theoretical discussion. A more in-depth exploration of how the features learned through SVQ are related to the original spatiotemporal data would greatly aid in fully elucidating the mechanism underlying SVQ.\n\n4. The ablation study conducted is insufficient. There is no doubt that setting the code size to 10000 will yield better performance compared to 1000. A more detailed discussion of the trade-offs involved (such as efficiency, convergence, etc.) with larger code sizes would be helpful.\n\n5. 
The introduction of redundant over-complete codebooks and additional computational overhead has resulted in a lack of discussion on computational efficiency, speed, and complexity, among other factors. Empirical measurements of training and inference times, memory usage, and computational complexity, as a function of codebook size, would provide a more comprehensive illustration of the advantages of SVQ."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1.Will the SVQ module added as a plug-in to the model have similar performance improvements for other tasks (such as image generation or natural language processing)? What are the applicable application scenarios?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The manuscript proposes a differentiable sparse soft vector quantization (SVQ) method, which is the first vector quantization method applied to spatiotemporal prediction and shows significant improvement.\n2. The SVQ method proposed in the manuscript has achieved leading performance in multiple real-world spatiotemporal prediction tasks, significantly reducing errors on multiple benchmark datasets, such as reducing errors by 7.9% on the WeatherBench dataset.\n3. The SVQ proposed in the manuscript can be seamlessly integrated into different types of spatiotemporal prediction models as a plug-in, and has improved performance in various architectures, demonstrating the versatility of the method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Vector quantization (VQ) is insufficient in improving the accuracy of spatiotemporal prediction. This paper introduces differentiable sparse soft vector quantization (SVQ) that can strike a balance between detail preservation and noise reduction, providing a solid foundation for full differentiability and sparse regression. Empirical studies on five spatiotemporal benchmark datasets show that SVQ achieves the best results, including a 7.9% improvement on the WeatherBench-S temperature dataset, a 9.4% average MAE reduction in video prediction benchmarks (Human3.6M, KTH, and KittiCaltech), and a 17.3% improvement in image quality (LPIPS)."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.The SVQ method proposed in the manuscript still requires a lot of computing resources, especially in the case of high-dimensional data and large-scale codebooks.\n2.The comparison methods cited by the author in Tables 1 and 2 are only up to date in 2022, and lack comparisons of the latest methods in the past two years."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "none"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weakness"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The proposed Differentiable Sparse Soft-Vector Quantization (SVQ) method represents a novel advancement in vector quantization techniques."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a novel Differentiable Sparse Soft-Vector Quantization (SVQ) method, which integrates sparse regression with differentiability to tackle optimization challenges in vector quantization. This approach aims to enhance representation capacity in spatio-temporal forecasting tasks, marking a significant advancement in the field. While the SVQ method presents an innovative approach to vector quantization, the paper would benefit from clearer connections between theory and practice, updated baselines, deeper integration insights, and improved mathematical clarity to strengthen its contributions to the field.\nSoundness:2"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The method does not adequately demonstrate how the theoretical advantages of sparse regression are translated into tangible improvements in quantization performance. While the authors discuss optimization strategies, they fail to provide a clear connection between these theoretical claims and the practical outcomes. A more comprehensive explanation of how these optimization techniques directly enhance quantization would bolster the credibility of their approach.\n\tIn the experimental section, I noticed that the baseline models employed are relatively outdated. Given the recent advancements in spatio-temporal forecasting, particularly the emergence of various diffusion-based methods that have demonstrated significant improvements in predictive performance, it would be beneficial for the authors to consider incorporating these state-of-the-art models as baselines. This would provide a more comprehensive evaluation of the proposed method's effectiveness and advantages.\n\tThe explanation of the quantization module's implementation lacks depth regarding its integration with the overall spatio-temporal forecasting model. While the authors outline the architecture and components involved, they do not provide sufficient details on how the quantization process interacts with other model elements or influences the final forecasting results. A more thorough exploration of these interactions would enhance the clarity and applicability of their proposed method.\n\tIn Section 4, the mathematical proof lacks clarity in the notation used, which may hinder readers' understanding. For example, what’s the meaning of g' after Eq. (8)? Additionally, the proof does not establish a strong connection to the problem being addressed. I recommend revising this section to improve the clarity of the symbols and to explicitly link the proof to the main objectives of the paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1. The paper emphasizes the advantages of the proposed method in spatio-temporal forecasting, but VQ is also widely used in generative tasks (e.g., VQ-VAE). Could this method be applied to such tasks?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Innovative Approach: The paper effectively combines sparse regression with differentiable quantization, addressing the non-differentiability and limited representational power of traditional VQ. Using MLP to approximate sparse regression allows the model to capture complex patterns efficiently. \n2. Simplicity and Effectiveness: The proposed method is intuitive and easy to implement, with straightforward derivations and motivations. It demonstrates significant improvements across multiple tasks and models. \n3. Comprehensive Experiments: The paper provides detailed evaluations of the proposed quantization mechanism, including ablation studies and supplementary materials that address key questions. The well-designed visualizations offer excellent insights into the behavior and strengths of SVQ."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses the limitations of traditional Vector Quantization (VQ) and demonstrates impressive performance in spatio-temporal forecasting. SVQ uses a two-layer MLP to approximate sparse regression, reducing computational complexity while maintaining the flexibility to map each input vector to multiple codebook vectors. This soft quantization captures the complex dynamics of spatio-temporal data, preserving essential information while minimizing noise. The experiments confirm that SVQ is an efficient and expressive quantization mechanism applicable to various forecasting tasks. The visualizations provide valuable insights into the behavior and advantages of SVQ."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Visual Layout: Perhaps due to space constraints, the layout of the figures and tables could be more aesthetically pleasing."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Does Vector Quantization Fail in Spatio-Temporal Forecasting? Exploring a Differentiable Sparse Soft-Vector Quantization Approach"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024does,\ntitle={Does Vector Quantization Fail in Spatio-Temporal Forecasting? Exploring a Differentiable Sparse Soft-Vector Quantization Approach},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=4CFVPCYfJ9},\nnote={under review}\n}"
},
"abstract": {
"value": "Spatio-temporal forecasting is crucial in various fields and requires a careful balance between identifying subtle patterns and filtering out noise. Vector quantization (VQ) appears well-suited for this purpose, as it quantizes input vectors into a set of codebook vectors or patterns. Although vector quantization (VQ) has shown promise in various computer vision tasks, it surprisingly falls short in enhancing the accuracy of spatio-temporal forecasting. We attribute this to two main issues: inaccurate optimization due to non-differentiability and limited representation power in hard VQ. To tackle these challenges, we introduce Differentiable Sparse Soft-Vector Quantization (SVQ), the first VQ method to enhance spatio-temporal forecasting. SVQ balances detail preservation with noise reduction, offering full differentiability and a solid foundation in sparse regression. Our approach employs a two-layer MLP and an extensive codebook to streamline the sparse regression process, significantly cutting computational costs while simplifying training and improving performance. Empirical studies on five spatio-temporal benchmark datasets show SVQ achieves state-of-the-art results, including a 7.9\\% improvement on the WeatherBench-S temperature dataset and an average MAE reduction of 9.4\\% in video prediction benchmarks (Human3.6M, KTH, and KittiCaltech), along with a 17.3\\% enhancement in image quality (LPIPS). Code is publicly available at https://anonymous.4open.science/r/SVQ-Forecasting"
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"spatio-temporal forecasting",
"vector quantilization",
"sparse regression",
"differentiable",
"soft"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/a1f86e0c6b447a94bd16170f8d5d0744ba037a6f.pdf"
},
"presentation": null,
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Does Vector Quantization Fail in Spatio-Temporal Forecasting? Exploring a Differentiable Sparse Soft-Vector Quantization Approach"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
4CR5Uc9EYf | EraseDiff: Erasing Data Influence in Diffusion Models | main | Active | machine unlearning;diffusion model | alignment, fairness, safety, privacy, and societal considerations | 3;3;5;5 | 2;3;2;3 | 2;2;2;2 | 2;1;2;2 | 2;1;3;3 | 4 | 2.5 | 2 | 1.75 | 2.25 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": {
"value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors."
}
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1) Line 205: It is not clear what this expression means. Does it mean \\( \\nabla_{\\phi = \\theta} = L(\\phi, D_f) = 0 \\)? Or is it for some \\(\\phi\\) which can be reached by an optimization algorithm after initializing the parameters at \\(\\theta\\)? In that case, would that depend on the optimization algorithm used, number of steps, random seed etc?\n\n2) Related to above, what is the theoretical justification for formulating this optimization problem? How does it relate to traditional definitions of unlearning?\n\n3) For the experiments (in Table 1 and 2), were they run over multiple random seeds? If yes, could the error bars and standard deviations be reported? Without those, it is hard to judge the significance of the results. For example, Line 399 mentions ‘there is a decrease in recall (diversity)’ when comparing EraseDiff to SA. However, error bars would be required to judge the scientific significance of this statement.\n\n4) Related to above, the experiments do not seem to show improvement in quality of generated images or ability to forget compared to some baselines. For Table 1, SA outperforms EraseDiff in both FID and \\( P_{\\psi}(y = c_f | x_f) \\) while they are nearly equal in precision and recall. For Table 2, ESD has a lower FID and nearly equal CLIP score.\nMinor comments (did not affect rating):\n\n5) Line 234: For clarity and completeness, showing the steps of how Eq 5 can be formulated as Eq 6 (using Liu et al.) might be better. This can be done in the appendix if space is a constraint."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1) The problem is important and relevant to machine unlearning.\n\n2) The proposed method is novel in its approach.\n\n3) Experiments consider interesting, relevant tasks, and the proposed method demonstrates a reduction in computation time compared to competitive baselines.\n\n4) Figure 2 illustrates EraseDiff empirically reduces gradient conflict on the CIFAR 10 dataset."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work proposes an algorithm for unlearning in diffusion models. Unlike prior work, which formulate the optimization problem as minimizing a sum of two losses - one for the remember set and one for the forget set - this work proposes a bi-level optimization problem. They derive the parameter update rule for this optimization problem. Experiments are performed on three tasks which demonstrate class and concept wise forgetting with mixed results."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "See questions."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Questions inserted in \"Weaknesses\"."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Pros:\n- Well-motivated problem\n- Clear problem formulation\n- Rigorous theoretical approach with pareto-optimal guarantees.\n- Comprehensive review of the literature relative to proposed method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper addresses the problem of unlearning specific data influences in\ngenerative models (here, diffusion models) to mitigate privacy risks\nassociated with data memorization. EraseDiff, the proposed methods, frames\nunlearning as a constrained multi-objective optimization problem that\nallows the model to retain its performance on preserved data while\nselectively removing the influence of data flagged for deletion. This\napproach involves adjusting the model’s generative process to diverge from\nthe typical denoising procedure when encountering the data to be forgotten\nby choosing a distribution different from the standard normal distribution\nused for the rest of the dataset. A first-order optimization method is\nproposed to address the computational complexity inherent in diffusion\nprocesses. Extensive experiments and comparisons with existing algorithms\nindicate that EraseDiff maintains the model’s utility while achieving\neffective and efficient data unlearning."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Cons:\n- Could benefit from additional experimentation in some aspects.\n- Some parts need to be clearly explained.\n- Performance not very different from one of the baselines.\n\nThe training objective part is not very clear. Equation (1) is fine. But\nequation (2) is says epsilon is sampled from normal distribution, but later\nin the equation, epsilon_f is mentioned. What is the relationship of\nepsilon_f with epsilon? Subsequently, it is also mentioned that epsilon_f\nis chosen to be a different from epsilon. What does \"This could be ...\"\nsentence mean? What was really used to confound the approximator? Does\nequation (4) correspond to a local minima?\n\nTable 1 does not really indicate that the EraseDiff is much better than the\nbaselines. Also, the authors only chose one class (airplane) for this line\nof experimentation. They could have considered more instances to show a\nmore comprehensive ealuation.\n\nFigure 3 and Table 2 indicate that SalUn performs very close to Erasediff,\nif not better in some aspects."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1.\tWhy the expectation in eq 2 also depends on $\\epsilon$? It does not have it anywhere.\n2.\tWhat is $\\phi_{init}$? Why can $L_f$ in equation (4) take one more undefined argument than the one defined in (2)?\n3.\tWhat exactly is the underlying “unlearning” problem? Do the authors aim to erase some concepts? If so, what is the problem formulation?\n4.\tWhat exactly is the design of $\\epsilon_f$?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The problem of machine unlearning for diffusion models is important."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors study the problem of machine unlearning problem for diffusion models. They propose an unlearning approach that exhibits better computational efficiency than prior works."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The clarity of the paper should be greatly improved.\n- The problem is not well and clearly defined until the experiment section.\n- Methodology-wise the contribution to prior works seems limited.\n\n## Detail comments\nWhile I agree that the authors study a very important and timely problem on machine unlearning for diffusion models, I found that the contribution and quality of the paper do not meet the bar of ICLR. Firstly, I find the clarity of the paper in general should be greatly improved, especially on the rigor of the notations. For instance, what exactly is $\\epsilon_f$ in equation (2)? It is never rigorously defined in the paper. Why $L_f$ in equation (4) can take an additional undefined argument $\\phi_{init}$ compared to equation (2)? Why the expectation in equation (2) depends on $\\epsilon$ when it does not appear anywhere else in equation (2)? What exactly is $\\epsilon_\\theta$ and what does the author mean by $\\epsilon_\\theta(x,t), \\epsilon_\\theta(x|c)$ and why is there multiple definitions of it? Note that I roughly get what the authors try to say but that is only because I am familiar with diffusion models. I feel the author should at least be rigorous in the definition of these basic terms as they are crucial for understanding the proposed method. \n\nAnother important issue is that the problem that the authors try to solve is never clearly well-defined. It is only clear to me until the experiment section that the authors want to modify the model so that it does not generate images pertaining to some labels or concepts. However, the way the authors introduce their method makes me feel that they aim to remove the influence of $D_f$ defined by certain labels or concepts to the model. Note that these two problems are very different, and I feel the authors do not convey clearly which goal they are trying to achieve. This also dulls the intuition and the reason why the proposed method makes sense in the first place. 
\n\nIn summary, I feel the paper need at least a major revision and I hope the authors can take time to polish their paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Why is Eq. (2) of this form? Should it be maximized instead?\n- What is the performance on other diffusion models other than the few ones listed?\n- (bonus) What is the relationship between solving for Eq. (5) and bilevel optimization?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The problem of unlearning for privacy and copyright considerations is significant.\n- Source code is provided.\n- The proposed EraseDiff method seems to be computationally friendly."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces EraseDiff, an unlearning method for diffusion models. The paper formulates the unlearning problem using a constraint optimization problem, which is approximated by a first-order method to solve. The paper compares EraseDiff with other methods using different images."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- No standard deviation is reported in the result tables.\n- The efficacy of the method empirically is not convincing. For example, using 99.91 to claim a win over 99.88 is not convincing in Table 3. \n- The loss function is not convincing. In the last paragraph of the introduction, the authors claim \"minimizing the loss over the remaining data while maximizing that over the forgetting data\". However, Eq. (2) is very similar to Eq. (1), and is still minimized in Eq. (3).\n- Line 200 states \"It is well known that\" but a reference is still needed, missing here.\n- The concept \"unlearning\" is not clearly defined, especially in the introduction part.\n- In Eq.(6), $a_t$ is not explained, there should be at least a sentence like \"for some fixed value $a_t$\".\n- Eq. (6) is not well motivated or explained.\n- It is unclear from Table 1 that EraseDiff leads the performance.\n- (minor) The authors do not need to submit a separate supplementary file as the appendix is already included in the main submission."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "An effective and efficient unlearning algorithm for diffusion generative models."
},
"_bibtex": {
"value": "@misc{\nwu2024erasediff,\ntitle={EraseDiff: Erasing Data Influence in Diffusion Models},\nauthor={Jing Wu and Trung Le and Munawar Hayat and Mehrtash Harandi},\nyear={2024},\nurl={https://openreview.net/forum?id=4CR5Uc9EYf}\n}"
},
"abstract": {
"value": "We introduce EraseDiff, an unlearning algorithm designed for diffusion models to address concerns related to data memorization. Our approach formulates the unlearning task as a constrained optimization problem, aiming to preserve the utility of the diffusion model on retained data while removing the information associated with the data to be forgotten. This is achieved by altering the generative process to deviate away from the ground-truth denoising procedure. \nTo manage the computational complexity inherent in the diffusion process, we develop a first-order method for solving the optimization problem, which has shown empirical benefits. Extensive experiments and thorough comparisons with state-of-the-art algorithms demonstrate that EraseDiff effectively preserves the model's utility, efficacy, and efficiency."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": {
"value": [
"~Jing_Wu6",
"~Trung_Le2",
"~Munawar_Hayat2",
"~Mehrtash_Harandi2"
]
},
"authors": {
"value": [
"Jing Wu",
"Trung Le",
"Munawar Hayat",
"Mehrtash Harandi"
]
},
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"machine unlearning",
"diffusion model"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": {
"value": "wu|erasediff_erasing_data_influence_in_diffusion_models"
},
"pdf": {
"value": "/pdf/5d4b48db4802cc7a1ad5eed7c2fc0e15666798ea.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/49c0ea145f1b4af5f9ea7d47192d550806adea2f.pdf"
},
"title": {
"value": "EraseDiff: Erasing Data Influence in Diffusion Models"
},
"venue": {
"value": "ICLR 2025 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Withdrawn_Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
4D0f16Vwc3 | ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing | main | Active | Mixture-of-Experts;Differentiable Routing;Sparsity | foundation or frontier models, including LLMs | 3;5;5;6;8 | 4;4;4;5;4 | 2;3;3;4;3 | 2;2;2;3;3 | 3;3;3;4;3 | 5.4 | 4.2 | 3 | 2.4 | 3.2 | 0.184637 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Why ReLU based router can enhance the domain specialization of MoE?\n\n- Whether the ReLU-based router can be further enhanced with shifted-ReLU, even with a learnable threshold?\n\n- How does the stage I/II affects the training performance?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Various ablation studies and visualization are conducted, analysing the effectivenss of the proposed methods.\n\n- The method is clearly explained and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work proposed a new MoE architecture that replace the TopK with ReLU in the routing modules. The ReLU-MoE is then optimized in a three stage training framework, experiments demonstrates the effectiveness of the proposed method across language modeling on different model sizes."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The novelty of ReMoE is modest, as it simply replaces the TopK operation with ReLU.\n\n- The perplexity improvements over dMoE are not significantly. whether downstream performance, such as on common-sense reasoning tasks, would shows significant enhancement?\n\n- In practical MoE implementations, an all-to-all dispatch and combine approach can be used to assign each token to the appropriate experts, thereby reducing memory usage and computational load. However, during the initial stage of training, the absence of high sparsity can impact memory and computation efficiency, as this phase effectively requires training a nearly dense model."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "One of the benefits of ReLU routing is that there is no inherent requirement for each token to be processed by $k$ experts. \n\n- Can ReMoE leverage the flexibility it is said to have with the regularization terms used?\n\n\n\n\n\n**Suggested writing changes:**\n- Scaling in parameters N --> Scaling in active parameters N"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper is generally well-written and easy to understand. In particular, the method section is thorough and gives a comprehensive explanation of the modifications made to the standard MoE training pipeline. I quite like Figure 4 as it gives a precise picture of the training dynamics specific to ReMoEs.\n- The proposed method outperforms all other methods compared to in the study.\n- Experiments in Figure 6 studying performance improvements for different numbers of active parameters, different expert granularity, and a varying number of experts demonstrate that the proposed ReLU MoEs along with their training algorithm consistently outperforms a standard TopK MoE.\n- Using the ReLU activation function as a replacement to TopK + softmax for MoE routing is a novel and potentially interesting idea. The method could allow for improved conditional computation as there is no hard requirement for each token to be processed by exactly $k$ experts."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a new routing method for MoE transformers and provides an accompanying training recipe. The routing method (ReMoE) replaces softmax + TopK expert selection with a ReLU activation function. This results in a fully differentiable MoE transformer, which requires a new loss penalty to encourage a balanced load and a reasonable number of active experts ($k$). The authors empirically evaluate their new method in the context of autoregressive language modeling. Experiments include a study of performance w.r.t loss at different model sizes, a varied number of experts, and different expert granularity; an analysis of different training stages; and an analysis of routing behavior."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "My **greatest concern** regards the attribution of the success of the method. As the paper is currently written, it suggests that using the ReLU activation function in place of TopK + softmax is the cause of the improved performance because it makes the routing function *differentiable*. However, the training algorithm is also changed. Notably, during the first ~100 steps of training the ReMoE has sparsity as low as 20% (Figure 4 (a)), requiring substantially more memory and computational cost for these first 100 steps. This leads me to ask the following question:\n\n*Is the success of the method due to the use of ReLU or is it due to the expensive near-dense training of the MoE for the first 100 steps?*\n\nA simple way to address this weakness would be to provide a dMoE baseline that trains with k=int( 0.8 * E ) (e.g., nearly dense) for the first 100 steps of training and switches to k=1 thereafter. \n\n\n**Other weaknesses**\n- While the ReMoE method allows for more flexible routing, it could unevenly distribute the load across a sequence, causing latency issues during autoregressive generation. How does the allocation of compute vary across the sequence of tokens?\n- In the introduction, you state \"the vanilla TopK router introduces a discrete and nondifferentiable training objective (Shazeer et al., 2017; Zoph et al., 2022), limiting the performance\". This is false. The training objective (e.g. auxiliary loss), itself, is differentiable. The difficulty is related to receiving a sparse gradient (e.g., a gradient for only a subset of activated experts).\n- Recent relevant works [1-3] are not mentioned in the related work section and not compared to in the main manuscript. Specifically, the sparse-mixer-v2 method, which improves the router's gradient estimate would be a relevant baseline to compare with.\n- I miss an evaluation of the performance of ReMoEs on LM evaluation benchmarks.\n- A batch size of 512k tokens is small for LLM pre-training. 
\n- While authors claim to train on a compute-optimal number of tokens, is 30B compute optimal for all models in the study? Many recent LLMs train well beyond the compute-optimal regime (e.g. Llama3 8B was trained for more than 50x compute optimal). Do the ReMoE results hold for longer training?\n- ReMoEs will result in high memory consumption early on in training. This is reasonable for smaller models, but can quickly become expensive for very sparse MoEs. This should be explicitly noted in section 3.5 line 269.\n\n\n\n\n\n[1] SPARSE BACKPROPAGATION FOR MOE TRAINING, https://arxiv.org/pdf/2310.00811\n\n[2] GRIN: GRadient-INformed MoE https://arxiv.org/abs/2409.12136\n\n[3] Dense Training, Sparse Inference: Rethinking Training of Mixture-of-Experts Language Models https://arxiv.org/pdf/2404.05567\n\n\n\n**I would be happy to raise my score if some concerns are addressed.**"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See above."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is well-written and authors have presented some interesting ablations like domain specialization of experts, assign a higher number of experts to rarer tokens, etc.\n\n2. Experimental results presented looks promising. It is interesting to see how a relatively simple idea work so much better. \n\n3. Three-stage training that naturally occurs during REMoE training is an interesting section. I recommend author to add the loss plots too in the figure to draw parallels."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduce ReMoE, a differentiable MoE architecture that incorporates ReLU routing in replacement for TopK routing for Mixture-of-Experts. This paper further propose methods to regulate the router’s sparsity while balancing the load among experts. ReMoE enables efficient dynamic allocation of computation across tokens and layers, exhibiting domain specialization. The paper observes a natural three-stage training process in ReMoE: warm-up stage, sparsifying stage, and stable stage offering interesting insights. Perplexity based experiments demonstrate that ReMoE consistently outperforms vanilla TopK-routed MoE across various model sizes, expert counts, and levels of granularity."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While there are many interesting experiments and ablation in the paper, I have several comments.\n\n1. The authors have failed to provide the training cost of different baselines used in the experiments and also for inference. Clearly, ReMoE stage I training activates a lot more experts during initial stage. Some speed analysis is must to clearly identify the computation overhead of dynamic expert selection both from inference and training perspective. A more detailed analysis of the trade-offs between performance gains and computational costs would be beneficial.\n\n2. Complete evaluation setup of paper is centered around training and validation loss, and perplexity which is completely not reliable to justify performance. To illustrate the performance benefit of ReMoE, it is critically important to show results translating to downstream tasks.\n\n3. TopK routing facilitates better parallelization opportunities while scaling MoEs. What are the authors thoughts on adopting a dynamic expert selection which can lead to poor GPU utilization.\n\n4. The authors introduces two new hyperparameters $\\lambda_{0}$ and $\\alpha$. MoE training is well-known to be tricky. I am concerned how these hyperparameters will further increase challenges. The authors have indeed provided some ablation on 20k\nsteps (∼10B tokens) for 182M token, but it is not clear how the observations replicate in large MoEs. In addition, even for this small-scale experiment, some $\\alpha$ values lead to rapid regularization changes and excessive oscillation. \n\n5. While the paper compares ReMoE with several baselines, it would be beneficial to include comparisons with other recent MoE variants, such as those addressing the routing bottleneck or improving expert utilization.\n\n6. Domain specialization experiments are interesting. 
I would recommend authors to conduct some data domain vocabulary coverage ratio related experiments for each experts to complete the full-picture.\n\nI am looking forward to additional experiments in rebuttal to update my rating."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "- What does $N$ refers to? If I understand correctly, $N$ refers to the number of activated parameters. In that case, the largest model should have about 7/8 billion parameters in total, right? I suggest the authors to explicitly mention the important hyper-parameter like the model size.\n- The Figure 7 is really interesting and insightful. But I'm wondering whether the router will assign different number of experts solely based on token ids (or token frequency), or it will also capture some semantic informaton (i.e. difficulty of tasks)."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "- The authors propose a simple, intuitive, and straightforward alternative method for MoE routing, which can be integrated into existing MoE structures via a drop-in approach. \n- The submission clearly illustrates the background information and the proposed method. \n- The performance gain by replacing the traditional routing with ReLU is interesting.\n- The submission provides many insightful observation, for instance, the analysis on the correlation between expert allocation and token frequency, and the effect of load balance loss."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Traditional Mixture of Experts (MoE) models use Top-K routing to select experts, which is a non-differentiable process. In this work, the authors propose using ReLU activation combined with load balancing loss and sparsity loss to address this issue. Several validation experiments are conducted to demonstrate the effectiveness of the proposed approach."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The experiment scale is relatively small, and only perplexity is reported. I'm wondering about the performance comparison on downstream tasks.\n- The performance improvement seems significant, but it would help if the authors can provide more analysis and experiment/theoritical evidence on why the fully differentiable router will help."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Why is TopK routing struggling beyond just \"being discontinuous\" and how is the new strategy overcoming this? Is it really the continuity of the routing or the more flexible nature of the sparse regularization which allows for multiple \"phases\" of training?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Mixture-of-experts models are a powerful new paradigm which can unlock favorable scaling. However, training such models is difficult, in large part due to the discrete nature of the TopK expert routing. The proposed routing method alleviates this difficult which enables better capabilities in MoE models. The nature of this solution is conceptually clean and gives good results. The paper is well-written and provides a useful empirical analysis."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a new routing method for MoE routing which addresses issues related to discontinuity inherent in TopK routing while still maintaining sparsity. The key idea is to replace TopK softmax with a ReLU routing function. This leads to a natural selection of experts from the non-zero routing scores. A sparse selection of experts is achieved via L1-regularization which evolves dynamically throughout training in order to eventually reach a target sparsity. The L1-regularization can also be combined with a load balancing loss to ensure both sparse selection of experts as well as an even distribution of token routing."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "It would be helpful to delve a bit more into the conceptual intuitions of the regularization penalty, especially with the additional load balancing and the dynamic penalty adjustment. This will make it easier for the reader to grasp the key conceptual contribution. Additionally, it would be helpful to shed some light on exactly what issue is being overcome with the new strategy."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose ReMoE, a fully differentiable MoE with ReLU routing. ReMoE consistently outperforms vanilla TopK-routed MoE and exhibits superior scalability w.r.t. the number of experts."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024remoe,\ntitle={ReMoE: Fully Differentiable Mixture-of-Experts with Re{LU} Routing},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=4D0f16Vwc3},\nnote={under review}\n}"
},
"abstract": {
"value": "Sparsely activated Mixture-of-Experts (MoE) models are widely adopted to scale up model capacity without increasing the computation budget. However, vanilla TopK routers are trained in a discontinuous, non-differentiable way, limiting their performance and scalability. \nTo address this issue, we propose ReMoE, a fully differentiable MoE architecture that offers a simple yet effective drop-in replacement for the conventional TopK+Softmax routing, utilizing ReLU as the router instead. We further propose methods to regulate the router's sparsity while balancing the load among experts. ReMoE’s continuous nature enables efficient dynamic allocation of computation across tokens and layers, while also exhibiting domain specialization. Our experiments demonstrate that ReMoE consistently outperforms vanilla TopK-routed MoE across various model sizes, expert counts, and levels of granularity. Furthermore, ReMoE exhibits superior scalability with respect to the number of experts, surpassing traditional MoE architectures."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Mixture-of-Experts",
"Differentiable Routing",
"Sparsity"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/6b26f5947c3c6afae4e5b2a3dce9d66e5b399e24.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/75748c23120ad49cc2d3b9f9a7c2f586f150748b.zip"
},
"title": {
"value": "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
4E0lCxBD0U | Decentralized Transformers with Centralized Aggregation are Sample-Efficient Multi-Agent World Models | main | Active | multi-agent reinforcement learning;world models;learning in imagination | reinforcement learning | 3;5;6;6 | 4;4;2;4 | 2;2;3;2 | 2;2;3;2 | 3;3;3;3 | 5 | 3.5 | 2.25 | 2.25 | 3 | -0.471405 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1.\tConsidering that in SMAC all agents share the same environment reward, is the $r_t^i$ predicted by the \"Shared Transformer\" the same for each agent? If they are different, how is the team reward used to train the agents during the imagination phase? Is it averaged from $\\{r_t^i \\}_{i=1}^{N}$?\n2.\tBased on Question 1, did the authors consider having each agent learn a different reward while learning the world model, in order to address the credit assignment problem in MARL through the world model learning process?\n3.\tIn the training process of the world model, how are the trajectories used as labels obtained? Also, please discuss what should be done in complex scenarios when there is no good initial policy to generate the trajectories.\n4.\tPlease explain in detail the role of learning $\\gamma$ in the overall method, and why it cannot be replaced with a constant.\n5.\tHow is the codebook $Z$ initialized, and does the initialization affect the learning outcomes under different conditions?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1.\tIn constructing the world model, the authors considered both centralized information and decentralized information.\n2.\tThe overall logic of the paper is coherent and easy to understand.\n3.\tThe paper conducted extensive experiments."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Considering the inevitable challenges of both centralized and decentralized learning in developing a world model, this paper proposes MARIE (Multi-Agent auto-Regressive Imagination for Efficient learning), a Transformer-based approach that integrates both methods. The process is divided into three stages: the first involves collecting multi-agent trajectories, the second focuses on learning the world model from these experiences, and the third uses the world model for policy learning. The second stage, which centers on the learning of the world model, involves discretizing observations with the VQ-VAE method and using the learned discretized codes to construct a Transformer architecture for transition prediction. Additionally, the authors incorporate agent-wise aggregation information to mitigate non-stationarity. Experiments on SMAC and MAMujoco are conducted to validate the method's effectiveness."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tThe learning results of the world model depend on the supervisory signals, specifically the trajectories generated by a superior policy used as labels. In complex scenarios, without trajectories produced by an optimal policy, it may be difficult to learn a complete dynamic transition."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Given that the global state is known, could it be directly used as the aggregated global feature? If analysis or experiments were performed to validate the effectiveness of the current agent-wise aggregation, it would be more convincing.\n- Since the policy relies on reconstructed observations, a deeper analysis of how errors in reconstructed observations impact final performance would be insightful.\n- \"The policies π are exclusively trained using imagined trajectories.\" Does this lead to wasted real experience collected during training?\n- I am curious about the prediction accuracy for discounts at each step. As the horizon (H) increases, can the model accurately predict the end of the game, and how does this affect performance?\n- Since MARIE separates model learning from policy learning, providing intuitive or experimental comparisons with methods that jointly learn the model and policy would increase the persuasiveness of the approach. For example, the following references could be useful:\n - [1] Benjamin Eysenbach, Alexander Khazatsky, Sergey Levine, and Ruslan Salakhutdinov. Mismatched no more: Joint model-policy optimization for model-based RL. Advances in Neural Information Processing Systems, 35:23230–23243, 2022.\n - [2] Raj Ghugare, Homanga Bharadhwaj, Benjamin Eysenbach, Sergey Levine, and Ruslan Salakhutdinov. Simplifying model-based RL: learning representations, latent-space models, and policies with one objective. arXiv preprint arXiv:2209.08466, 2022.\n- In the first ablation experiment, regarding learning local dynamics instead of joint dynamics, the authors state that “the scalability issue is exacerbated by a growing number of agents.” However, in Figure 4, the performance of learning joint dynamics degrades more with 3 agents (3s_vs_3z) than with 5 agents (2s3z). This seems inconsistent with the authors' claim.\n- In the third ablation study, it would be worth exploring the effect of not discretizing observations and instead learning a continuous embedding through a linear layer. Additionally, assessing the impact of policies that directly depend on the internal hidden states of the Transformer, rather than reconstructed observations, would be insightful. Intuitively, the policy input only needs to capture decision-relevant information, not a complete image reconstruction. Moreover, errors in reconstruction may negatively impact policy learning.\n- In Figure 7, the authors claim that MARIE “has remarkably better error,” but the actual curves do not seem to support the term “remarkably.” Providing a corresponding performance comparison curve would make this claim more visually intuitive.\n- Including a more comprehensive set of experimental results in Table 1 would enhance the paper.\n- Can the authors provide more details on the computational cost of using a Perceiver Transformer for centralized aggregation? How does this affect MARIE's scalability as the number of agents increases?\n- Could the authors clarify the role of intra-step autoregression and how it contributes to the overall performance of the model? A comparison between the Perceiver and other common aggregation methods would also be helpful.\n- The paper shows strong performance on simulation benchmarks, but adding results or discussions on how MARIE could be applied to real-world scenarios would increase its impact and relevance.\n- The Preliminary section could benefit from a brief introduction to the Perceiver and other aggregation techniques, making the paper more accessible to readers unfamiliar with these concepts.\n- Are there any limitations of MARIE that might make it less effective in certain multi-agent settings, such as environments with highly heterogeneous agents or asymmetric observation spaces?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The integration of decentralized local dynamics learning and centralized feature aggregation is well-motivated and effectively addresses key challenges in MARL, such as scalability and non-stationarity.\n2. The use of the Perceiver Transformer for centralized representation aggregation is an innovative contribution that facilitates efficient global information sharing between agents while maintaining scalability."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a novel framework, **MARIE (Multi-Agent auto-Regressive Imagination for Efficient learning)**, which leverages a Transformer-based world model to enable sample-efficient policy learning in **Multi-Agent Reinforcement Learning (MARL)**. MARIE addresses two key challenges in MARL: scalability and **non-stationarity**. The approach combines decentralized local dynamics learning with centralized feature aggregation using a Perceiver Transformer. The authors evaluate the proposed method on the Starcraft Multi-Agent Challenge (SMAC) and **MAMuJoCo**, demonstrating improved sample efficiency and performance over existing model-free and model-based methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. **Necessity of individual components**: The authors claim that this work is “the first pioneering Transformer-based world model for multi-agent systems,” but the underlying techniques—centralized feature aggregation, the Perceiver Transformer, and autoregressive modeling of discretized tokens—are already present in the literature. More ablation experiments to demonstrate the necessity of these components would strengthen the paper. It is necessary to investigate whether it is a kind of simple combinations of different techniques, or more reasonable design for MARL.\n2. **Limited comparison to existing Transformer-based world models**: While the paper compares its method with model-free and some model-based MARL approaches, a more in-depth exploration of existing Transformer-based methods in MARL, or related architectures from single-agent RL that could directly be extended to MARL (e.g., IRIS, TWM and other methods), is lacking. It would be beneficial to further discuss why existing single-agent Transformer-based approaches cannot be directly adapted to MARL."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "-"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- The paper presentation would benefit from increasing the font size of figures in the main text."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
            "value": "- The proposed model combines decentralized dynamics modeling with centralized representation aggregation using Transformer sequence modeling.\n- The paper is well-written and easy to follow.\n- The authors provide the ablation results and the analysis of attention patterns to reveal the implicit decision-making features."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
            "value": "The authors propose a Perceiver Transformer-based world model for MARL that addresses the problems of scalability and non-stationarity. The evaluations on SMAC and MAMuJoCo demonstrate the model's superiority over multiple baselines."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper presentation could be improved with captioning figures of experimental results with short conclusions"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "### Questions\n\n1. Why were only four seeds used for evaluation, and are there plans to conduct more extensive statistical testing to validate the results? I understand that experiments can be expensive to run and lots of tasks were used thus I thoroughly recommend reading the RLiable paper to see how you can address this without increasing computational budget. I believe you could simply run the tests with existing results.\n\n2. Have you considered using SMAC v2, or is there a rationale for continuing with SMAC v1 despite its known flaws?\n\n3. Could you provide a direct comparison on environments beyond SMAC for methods like MAMBA and potentially CoDreamer?\n\n4. Given that the difference in compounding error of the world models between MAMBA and MARIE get worse over time, would the performance gap between the two be reduced if using a smaller imagination horizon for training and does this possibly bridge the gap?\n\n### Suggestions:\n\n- **Use of Benchmarks**: Including evaluations on more diverse environments, such as those in the updated SMAC version or other cooperative multi-agent benchmarks, would strengthen the paper’s claims.\n\n- **Statistical Validation**: Incorporating more seeds and employing robust statistical methods would add credibility to the results.\n\nIn conclusion, I don’t feel confident enough that the results presented truly indicate a statistically significant performance improvement and that the architecture itself doesn’t provide enough of a difference to warrant acceptance without the solid empirical proof that it is a superior method. If a) the RLiable evaluation methodology is run and the results present statistically significant improvements and b) MAMBA is run on MAMujoco with similar conclusions to SMAC then i will be willing to raise my score."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The idea is presented clearly, and the architecture and experimental setup are detailed well. The integration of decentralised and centralised components is relatively straightforward and understandable.\n\n- The submission provides comprehensive implementation details and references all the open-source repositories used for baselines, making it likely reproducible. The authors also mention that code will be released after the review process, which supports transparency.\n\n- The paper presents results that show improvements over baselines. The usage of SMAC and MAMujoco environments offers a broad view of the architecture’s capabilities i.e. both discrete and continuous action spaces.\n\n- The introduction of a Perceiver Transformer for centralised aggregation in a multi-agent context is an interesting approach and could provide valuable insights for the community."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces MARIE (Multi-Agent auto-Regressive Imagination for Efficient learning), a Transformer-based architecture designed to improve sample efficiency through improving the accuracy of multi-agent world modelling. The authors aim to address challenges of world modelling in MARL, particularly the scalability and non-stationarity issues, by using decentralised local dynamics combined with centralised aggregation through a Perceiver Transformer. The architecture/algorithm is evaluated on SMAC and additional experiments are conducted on MAMujoco, showing improved sample efficiency and overall performance compared to existing model-free and model-based methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
            "value": "- The experiments lack rigourous statistical testing, which is critical given the limited number of seeds (only four). This raises concerns about the reliability and significance of the results. Referring to rigorous RL evaluation protocols such as those outlined in [rliable_code](https://github.com/google-research/rliable), [rliable_paper](https://arxiv.org/pdf/2108.13264) and [marl_eval](https://proceedings.neurips.cc/paper_files/paper/2022/file/249f73e01f0a2bb6c8d971b565f159a7-Paper-Conference.pdf) (among others) would have strengthened the empirical claims. These evaluation protocols have become a common standard that the community should uphold. Without statistical validation, it's hard to confirm that the reported improvements are statistically significant.\n\n- The use of SMAC, particularly SMAC v1, is problematic as it is an outdated benchmark with known serious flaws, see [SMACv2](https://arxiv.org/abs/2212.07489). The evaluation would benefit from using the updated SMAC v2 version, which addresses some of these issues and gives more credibility to the method. Furthermore, the absence of comparisons with MAMBA in environments beyond SMAC makes it difficult to comprehensively evaluate the advantages of MARIE over existing architectures.\n\n- While the architecture is interesting, the novelty might be overstated. There are similarities between MARIE and existing methods, such as MAMBA, with no stark performance difference (at least without the statistical testing, i don't believe we can make a fair claim that the difference is stark). Additionally, a recent approach, see https://openreview.net/forum?id=f2bgGy7Af7, using graph attention networks (GATv2) (which are essentially transformers in a sense) closely mirrors the methodology, questioning the novelty of MARIE's transformer-based aggregation. Not mentioning this in the related work section detracts from the contribution's novelty marginally. However, i will say this is the least important weak point considering that if the results were rigorously validated i would still believe this methodology is worth it for the community to see."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduce the first Transformer-based multi-agent world model for sample-efficient multi-agent policy learning in its imaginations."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024decentralized,\ntitle={Decentralized Transformers with Centralized Aggregation are Sample-Efficient Multi-Agent World Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=4E0lCxBD0U},\nnote={under review}\n}"
},
"abstract": {
"value": "Learning a world model for model-free Reinforcement Learning (RL) agents can significantly improve the sample efficiency by learning policies in imagination. However, building a world model for Multi-Agent RL (MARL) can be particularly challenging due to the scalability issue in a centralized architecture arising from a large number of agents, and also the non-stationarity issue in a decentralized architecture stemming from the inter-dependency among agents. To address both challenges, we propose a novel world model for MARL that learns decentralized local dynamics for scalability, combined with a centralized representation aggregation from all agents. We cast the dynamics learning as an auto-regressive sequence modeling problem over discrete tokens by leveraging the expressive Transformer architecture, in order to model complex local dynamics across different agents and provide accurate and consistent long-term imaginations. As the first pioneering Transformer-based world model for multi-agent systems, we introduce a Perceiver Transformer as an effective solution to enable centralized representation aggregation within this context. Main results on Starcraft Multi-Agent Challenge (SMAC) and additional results on MAMujoco show that it outperforms strong model-free approaches and existing model-based methods in both sample efficiency and overall performance."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"multi-agent reinforcement learning",
"world models",
"learning in imagination"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/8ef4b28b3d20ddb0652bac50a7f9f1f2ddaf5447.pdf"
},
"presentation": null,
"primary_area": {
"value": "reinforcement learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Decentralized Transformers with Centralized Aggregation are Sample-Efficient Multi-Agent World Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
4EjdYiNRzE | O(d/T) Convergence Theory for Diffusion Probabilistic Models under Minimal Assumptions | main | Active | score-based generative model;diffusion model;denoising diffusion probabilistic model;sampling | learning theory | 6;6;8 | 4;3;4 | 3;3;4 | 3;3;3 | 3;2;3 | 6.666667 | 3.666667 | 3.333333 | 3 | 2.666667 | 0.5 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1.\tI wonder if the noise of the sampling process is pre-defined. More specifically, whether $Z_1,…, Z_T$ in (2.4) and (4.2) is exactly the same. In my opinion, if the noise of the ground-truth process and the approximated process are the same, the problem would become easier. Could the author discuss it in detail?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.\tThe result is really interesting since this work achieves better results without the assumption on the Jacobian of $s_t$."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work studies the sample complexity of diffusion models with a stochastic sampling process. By introducing two auxiliary sequences, they divide the discretization complexity and the approximated score error and achieve $O(d/T)$ convergence guarantee under a mild assumption on the score function."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tIt would be better to highlight the technical novelty compared to [1]. In fact, I think there are many points that are worth mentioning. More specifically, controlling the Jacobian of the ground-truth score and introducing two auxiliary sequences is the source to remove the Jacobian assumption (If I have any misunderstanding, please correct me.). The author can discuss them in detail to help readers to understand these papers.\n2.\tThe noise schedule is highly specific. Though this schedule has been used in some theoretical works [1][2][3], it would be better to discuss this schedule in a real-world setting.\n\n[1] Li, G., Wei, Y., Chi, Y., & Chen, Y. (2024). A sharp convergence theory for the probability flow odes of diffusion models. arXiv preprint arXiv:2408.02320.\n\n[2] Li, G., Wei, Y., Chen, Y., & Chi, Y. (2024, May). Towards non-asymptotic convergence for diffusion-based generative models. In The Twelfth International Conference on Learning Representations.\n\n[3] Li, G., & Yan, Y. (2024). Adapting to Unknown Low-Dimensional Structures in Score-Based Diffusion Models. arXiv preprint arXiv:2405.14861."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. In line 269, it claims that the error bound holds provided that $T>> d \\log^2 T$ (here I simply choose $\\delta=1$). However, in line 247, it is stated that the result holds even when $T\\asymp d$. This appears to be a contradiction. Could you clarify the correct relationship between $T$ and $d$?\n\n2. What is the intuition behind introducing the generalized density in Section 4? It seems essential for the proof, but only the density of the auxiliary processes could become $\\infty$ at certain points? I am just curious about the reason of introducing those auxillary processes.\n\n3. What is the order of the constants in the error bound? e.g. in line 1127, it mentions that these constants need to be large enough. Could they be on the order of $O(d)$ or $O(T)$? If so, would this affect the order of the result of Theorem 1?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "The $O(d/T)$ convergence rate achieved for SDE-based samplers matches the rate of ODE-based models, bridging the gap between these methods under relaxed assumptions. This is a notable advancement for SDE-based models, particularly in high-dimensional settings. The analysis requires only finite first-order moments of the target distribution, which is a much weaker condition than those in previous studies. By developing new tools to capture the propagation of errors through the reverse process, the authors provide an elegant framework that simplifies the analysis without resorting to intermediate metrics like KL divergence. This results in more direct and interpretable bounds."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper establishes a state-of-the-art convergence theory for diffusion probabilistic models, particularly focusing on SDE-based samplers. Under minimal assumptions, the authors derive an $O(d/T)$ convergence rate in total variation distance, substantially improving on prior results that typically require stronger assumptions or yield slower rates. The authors achieve this by introducing novel analytical tools that track error propagation across each step in the reverse process, leading to finer control over both discretization and score estimation errors."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I did not identify any major weaknesses in this paper. However, I have a few minor questions. See the next section."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weakness."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper is generally well-written, and I mostly enjoyed reading it. The results are insightful, and the \"minimal\" conditions enhances our understanding of what is really the key to the success of score-based diffusion models."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the (sampling) convergence of the DDPM, providing the status of the arts rate $O(d/T)$ under minimal condition."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
            "value": "There are several weaknesses and questions/comments.\n\n(1) The authors consider $TV(p_{X_1}, p_{Y_1})$ instead of the target distribution $p_{X_0}$ -- this is because score matching is often very bad close to time $0$ (in the continuous limit). People usually do \"early stopping\" to avoid this bad region (as did in Chen et al.) The authors may make explanation to better guide the readers.\n\n(2) In Assumption 1, the authors assume that $E|X_0|^2 \\le T^{c_M}$, meaning that the data size (depending on $d$) is bounded by polynomial in $T$. Is this a valid assumption? In the original paper of Song et al., $T$ is not set to be too large. In some sense, this condition already assumes a tradeoff between $T$ and $d$ implicitly. \n\n(3) Theorem 1: often for the convergence analysis there are three pieces: (1) initialization error, (2) score matching error and (3) discretization error. (2) and (3) are there, where is (1)? Probably it is absorbed in one of the terms and the authors should explain.\n\n(4) Theorem 1: the score matching contribution $\\varepsilon_{\\tiny \\mbox{score}} \\sqrt{log T}$ is impressive, which is dimension free. I would point out another work https://arxiv.org/abs/2401.13115, which proposed a \"contractive\" version, which also make the score matching contribution to be dimension free. However, it is at the cost of possibly larger initialization error, which requires to choose the hyperparameters carefully to balance the two. This brings back my question (3) on the contribution from initialization error in this paper's setting.\n\n(5) The authors proved the results for DDPM (or VP in the continuous time). I wonder if the arguments/results are specific to DDPM/VP. It is known that e.g., other popular models as VE can be obtained by a reparametrization trick (https://arxiv.org/abs/2206.00364). I think it may be possible to get the results for general class of models, which may be even more significant.\n\n(6) The authors only stated the convergence results for SDE sampler. What about the corresponding ODE sampler? Is there any expectation on even improving the rate using deterministic sampler?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024odt,\ntitle={O(d/T) Convergence Theory for Diffusion Probabilistic Models under Minimal Assumptions},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=4EjdYiNRzE},\nnote={under review}\n}"
},
"abstract": {
"value": "Score-based diffusion models, which generate new data by learning to reverse a diffusion process that perturbs data from the target distribution into noise, have achieved remarkable success across various generative tasks. Despite their superior empirical performance, existing theoretical guarantees are often constrained by stringent assumptions or suboptimal convergence rates. In this paper, we establish a fast convergence theory for a popular SDE-based sampler under minimal assumptions. Our analysis shows that, provided $\\ell_{2}$-accurate estimates of the score functions, the total variation distance between the target and generated distributions is upper bounded by $O(d/T)$ (ignoring logarithmic factors), where $d$ is the data dimensionality and $T$ is the number of steps. This result holds for any target distribution with finite first-order moment. To our knowledge, this improves upon existing convergence theory for both the SDE-based sampler and another ODE-based sampler, while imposing minimal assumptions on the target data distribution and score estimates. This is achieved through a novel set of analytical tools that provides a fine-grained characterization of how the error propagates at each step of the reverse process."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"score-based generative model",
"diffusion model",
"denoising diffusion probabilistic model",
"sampling"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/d5ef5b5bee1fd6c3220ba7d242e5098af2487b18.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning theory"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "O(d/T) Convergence Theory for Diffusion Probabilistic Models under Minimal Assumptions"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
4ExwvWAy9b | FactCheckmate: Preemptively Detecting and Mitigating Hallucinations in LMs | main | Active | Large Language Models;Hallucination Detection;Hallucination Mitigation;Factuality | interpretability and explainable AI | 3;3;5;5 | 5;5;4;4 | 2;2;3;2 | 2;1;3;2 | 3;1;3;3 | 4 | 4.5 | 2.25 | 2 | 2.5 | -1 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Both the detection and mitigation models are lightweight, resulting in minimal inference overhead, which is advantageous for practical applications.\n2. The approach is evaluated across various large-scale models, including Llama, Mistral, and Gemma, demonstrating its broad applicability."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a novel approach that predicts and mitigates hallucinations during the generation process by learning the internal representations of language models. This method is forward-looking in that it seeks to intervene before hallucinations occur."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tThe paper lacks an analysis of the generalizability of the learned classification network and intervention model. Specifically, it is unclear whether the trained classification and intervention models are generalizable across different large models and tasks. Given that the data collection for training was based on only three tasks, questions remain regarding the generalizability to other tasks. Is a new training dataset needed for additional tasks, or does the current model extend effectively?\n2.\tThe dataset construction raises some issues or lacks clarity. For certain tasks, it may be straightforward to judge whether the generated output is correct. However, in the case of generative tasks—particularly when the output is lengthy—it becomes challenging to determine whether the output from the large model is accurate, and thereby to ascertain whether the label indicates hallucination. This aspect is not thoroughly addressed in the paper.\n3.\tFor different large models, it is necessary to reconstruct training datasets and train distinct classifiers and intervention networks, making this process relatively complex. Although it may not increase inference time, the time required for dataset construction and model training is also significant and should not be overlooked.\n4.\tThere is a lack of analysis regarding the structure of the classifier and intervention models used. Specifically, the classifier is implemented as a two-layer MLP, and the perturbation model as a three-layer MLP. Details such as the hidden dimensions of these MLPs, and the potential performance impact of adding or reducing layers, are not discussed. Moreover, it is unclear how alternative models, such as transformers, might affect performance.\n5.\tThe paper does not provide specific details on the training setup for the classifier and intervention models, which creates challenges for reproducibility. 
In my view, training the classification network and intervention model should be central to this method, but there is limited discussion provided.\n6.\tThe experiments presented are insufficient to substantiate the effectiveness of the proposed method. The experimental section primarily compares the base model, but numerous methods already exist for hallucination detection and mitigation in large models, such as PoLLMgraph, SELFCHECKGPT, and TruthX. These more advanced baselines are not included in the comparisons. Additionally, the number of datasets used appears limited, potentially insufficient to demonstrate broad effectiveness across various tasks and datasets. I recommend conducting experiments on a wider range of datasets to strengthen the validation.\n7.\tClearly, this method, which relies on internal states, cannot be applied to black-box large models like GPT. This point should be included in the limitations section of the paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "- Can the authors elaborate on how hyperparameter sensitivity impacts the intervention model’s reliability?\n- Is FACTCHECKMATE adaptable to more complex generative tasks, like dialogue or long-form generation?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- Extensive experimental results across different models and datasets are robust and demonstrate effectiveness.\n- Offers practical implications for real-world applications where factual accuracy is crucial, enhancing LM reliability."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces FACTCHECKMATE, a framework aimed at preemptively detecting and mitigating hallucinations in language models (LMs) by analyzing their hidden states before generating responses. The approach leverages lightweight classifiers to detect hallucinations early and an intervention model to adjust hidden states for improved factual accuracy."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Writing needs improvement, including but not limited to the abstract and introduction\n- Typos, e.g. Line 44 \"representaions\"\n- determining the factuality through merely probing the LMs' representations is not novel as a methodology\n- Limited exploration of other LM components beyond hidden states.\n- Generalizability of results is uncertain for tasks beyond QA."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "No ethics review needed."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "**Question 1** Could the authors test the method on abstractive summarization tasks to demonstrate whether it performs well in open-book settings? This would help validate the method’s applicability across different types of hallucinations.\n\n**Question 2** Could the authors verify the proposed method’s generality by evaluating its performance on dataset other than NQ-open? Understanding the method’s effectiveness across diverse datasets is essential for demonstrating its robustness.\n\n**Question 3** Could the authors apply the proposed method to benchmarks like Alpaca-eval? It would be interesting to see how the intervention affects the model’s performance on nominal questions and whether there is any degradation in accuracy due to false positives."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "**Strength 1** The paper introduces a new approach to detecting hallucinations by leveraging the internal representations of LMs.\n\n**Strength 2** The experimental design is solid, and the results effectively demonstrate the effectiveness of the proposed method.\n\n**Strength 3** The paper is well-written and easy to follow, with clear explanations of the methodology and results."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper explores the possibility of preemptively detecting and mitigating hallucinations in language models (LMs). The authors present FACTCHECKMATE, a method that learns to predict whether an LM will hallucinate based on the model's hidden states before decoding begins. If a hallucination is predicted, FACTCHECKMATE adjusts the LM's hidden states to produce more factual outputs. The method leverages the rich signals provided by the internal representations of LMs. Experimental results demonstrate that FACTCHECKMATE achieves over 70% preemptive detection accuracy and increases the factualness of LM outputs by an average of 34.4% compared to no intervention. The inference time overhead introduced by FACTCHECKMATE is approximately 3.16 seconds."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**Weakness 1** The paper focuses solely on close-book hallucinations, whereas many hallucinations occur in open-book settings, such as in abstractive summarization. Evaluating the method's effectiveness in handling open-book hallucinations would provide a more comprehensive understanding of its capabilities.\n\n**Weakness 2** The evaluation of the proposed method's factuality is conducted on the NQ-open dataset, and the classifier used is also trained on the same dataset. It remains unclear whether the method can generalize to other datasets, which is crucial for demonstrating the robustness of the approach.\n\n**Weakness 3**: There is no discussion regarding the potential impact of the proposed method on nominal (non-hallucinatory) questions. Since the classifier might have a false positive rate (FPR), it is important to understand how the intervention affects the performance on questions that do not contain hallucinations."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See the weakness."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Overall, the presentation is well-written and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents an approach to detect and mitigate hallucinations in language models (LMs) before they occur. The authors introduce a system named FactCheckmate, which leverages the hidden states of LMs to predict potential hallucinations. This is achieved through a classifier that assesses whether the LM is likely to produce a hallucination based on the internal signals from its hidden states. When a hallucination is detected, FactCheckmate intervenes by adjusting the hidden states to steer the model towards generating more factual content."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Although many methods for detecting and mitigating LLM hallucinations are outlined in the related work, the authors compare their approach with only one method. To convincingly demonstrate the superiority of their method, it would be prudent to include 3-4 baselines for both detection and mitigation aspects. Without this broader comparison, I cannot recognize the advantages of the authors' approach.\n\n2. I appreciate the experiments conducted on different open-source model families, but there is limited information on the performance of the method on Llama2 Chat and Llama3 Instruct models. Furthermore, it's unclear how the method performs on larger models like Llama2 70B and Llama3 70B. This raises questions about the scalability and generalizability of the proposed approach across various model sizes.\n\n3. While the authors compare their method, FactCheckMate, under random sampling conditions, its effectiveness significantly diminishes from previous levels above 60% to now below 50%. This indicates that FactCheckMate may not be as robust under varied sampling conditions.\n\n4. The claim of a 3.16-second average time in the abstract lacks rigor. Details about the GPU and CPU environments where these measurements were taken are not provided. Additionally, the use of 400 few-shot prompts does not offer a comprehensive view of performance. It would be beneficial to see how the method performs under long-context scenarios to better understand its effectiveness.\n\n5. Regarding the training of an intervention model, I can't find which dataset was used or discuss the hyperparameters involved in detail. More thorough discussion and transparency about the training conditions and parameters would enhance the credibility and reproducibility of the research."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Utilizing the inner workings of LMs’ to preemptively detect and mitigate hallucinations"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024factcheckmate,\ntitle={FactCheckmate: Preemptively Detecting and Mitigating Hallucinations in {LM}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=4ExwvWAy9b},\nnote={under review}\n}"
},
"abstract": {
"value": "Language models (LMs) hallucinate. We inquire: Can we detect and mitigate hallucinations before they happen? This work answers this research question in the positive, by showing that the internal representations of LMs provide rich signals that can be used for this purpose. We introduce FactCheckMate, which preemptively detects hallucinations by learning a classifier that predicts whether the LM will hallucinate, based on the model's hidden states produced over the inputs, before decoding begins. If a hallucination is detected, FactCheckMate then intervenes, by adjusting the LM's hidden states such that the model will produce more factual outputs. FactCheckMate provides fresh insights that the inner workings of LMs can be revealed by their hidden states. Practically, both the detection and mitigation models in FactCheckMate are lightweight, adding little inference overhead; FactCheckMate proves a more efficient approach for mitigating hallucinations compared to many post-hoc alternatives. We evaluate FactCheckMate over LMs of different scales and model families (including Llama, Mistral, and Gemma), across a variety of QA datasets from different domains. Our results demonstrate the effectiveness of leveraging internal representations for early hallucination detection and mitigation, achieving over 70% preemptive detection accuracy. On average, outputs generated by LMs with intervention are 34.4% more factual compared to those without intervention. The average overhead difference in the inference time introduced by FactCheckMate is around 3.16 seconds."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Large Language Models",
"Hallucination Detection",
"Hallucination Mitigation",
"Factuality"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/c503945e847948703588b4a4a01f30df7c827240.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "FactCheckmate: Preemptively Detecting and Mitigating Hallucinations in LMs"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
4F1a8nNFGK | Context is Key: A Benchmark for Forecasting with Essential Textual Information | main | Active | Time series;forecasting;multimodality;foundation models;contextual forecasting;deep learning;machine learning;context-awareness | learning on time series and dynamical systems | 3;5;5;6 | 4;4;4;2 | 1;3;1;3 | 2;3;3;3 | 2;3;4;3 | 4.75 | 3.5 | 2 | 2.75 | 3 | -0.662266 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. How the textual contexts were annotated, including any guidelines, annotator expertise, and inter-annotator agreement (IAA) metrics?\n\n2. Do the authors envision methods to automate or partially automate this process, such as using existing NLP techniques to generate context?\n\n3. Could the authors provide explicit definitions for each capability and clarify the criteria used to categorize tasks?\n\n4. Could the authors clarify whether this discrepancy suggests limitations of the CRPS metric or other factors in the benchmark design?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "1. Good writing and easy to follow\n2. CiK uniquely requires essential textual context for forecasting, marking a new direction in multimodal prediction.\n3. The benchmark is robust, with real-world tasks and a novel, context-focused RCRPS metric."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a new benchmark, CiK, to evaluate how well forecasting models can use essential textual context alongside numerical data to improve prediction accuracy. The benchmark includes 71 tasks across various fields, where models need to integrate natural language information—like historical trends or future events—with time series data for accurate forecasts. To assess performance, the authors develop the Region of Interest CRPS (RCRPS) metric, which emphasizes context-sensitive parts of the forecast and accounts for constraints stated in the text. Through experiments, they show that a simple prompting method for large language models (LLMs) outperforms traditional forecasting methods, underscoring the importance of context for improved predictions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Missing Information on Context Annotations: \nThe paper relies on carefully crafted textual contexts but omits crucial details about the annotation process, such as the guidelines provided to annotators, the number and qualifications of annotators, methods used to resolve disagreements, and quantitative measures of inter-annotator agreement (IAA). This lack of information raises questions about the consistency and reliability of the annotations. Including examples or a sample of the annotation guidelines, a description of annotator expertise, and the process for calculating IAA would strengthen the benchmark’s credibility and demonstrate rigorous annotation practices.\n\n2. Limited Benchmark Novelty: \nWhile the CiK benchmark combines existing time-series datasets with manually created textual contexts, its contribution to multimodal benchmarks is limited in novelty. The approach resembles prior work that integrates time-series data with textual sources like news or social media.[1][2] To clarify its uniqueness, the authors could provide comparisons to specific existing work and clearly articulate the novel contributions or improvements over these prior works. Additionally, the manual creation of contexts raises concerns about scalability; introducing semi-automated methods or leveraging AI to generate contexts could make the benchmark more practical for real-world applications and future expansions.\n\n3. Ambiguous Task Type Annotations: \nThe paper lacks clarity in task categorization, with no explicit definitions provided for each model capability category. For instance, “instruction following” is inconsistently applied, leaving tasks like “Public Safety” uncategorized, despite requiring instruction interpretation. It would be helpful if the authors included definitions for each capability category, specified criteria for categorizing tasks, and offered examples illustrating why certain tasks fall into each category. 
This additional information would clarify the task taxonomy and improve the interpretability of the benchmark structure.\n\n4. Unexplained Results Discrepancies: \nCertain performance discrepancies raise concerns about the validity of the benchmark’s metrics. For example, LLMP Mixtral-8x7B shows lower CRPS performance with context compared to without in Figures 4 and 5, yet it still appears to outperform traditional quantitative models when using context. This inconsistency suggests that CRPS may not fully capture the forecast quality in multimodal contexts. The authors could benefit from including a discussion on why CRPS was chosen, exploring alternative or complementary metrics, or providing a deeper analysis of the observed discrepancies to enhance the reliability of the reported results.\n\n5. Limited Model Variety: \nThe benchmark’s experimental setup primarily includes larger models like Llama-3 series, limiting the variety across model sizes and architectures, as well as smaller models like Mistral, Qwen, and Falcon. A more diverse set of models, including smaller or less resource-intensive models, could offer broader insights and improve the benchmark’s generalizability. Explaining any practical or strategic reasons for the current model selection would provide additional context. Exploration of smaller models or discussing plans for future testing would also enhance the paper’s impact.\n\n[1] Sawhney, Ramit, Arnav Wadhwa and Shivam Agarwal. “FAST: Financial News and Tweet Based Time Aware Network for Stock Trading.” Conference of the European Chapter of the Association for Computational Linguistics (2021).\n\n[2] Liu, Mengpu, Mengying Zhu, Xiuyuan Wang, Guofang Ma, Jianwei Yin and Xiaolin Zheng. “ECHO-GL: Earnings Calls-Driven Heterogeneous Graph Learning for Stock Movement Prediction.” AAAI Conference on Artificial Intelligence (2024)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Are all the time series is univariate? \n2. In this benchmark construction, have you tried very refined text rather than information at a specific time step or overall? How did it perform?\n3. Regarding retrieval, if I use the time series segment corresponding to a specific text as the retrieval \"text\", will the performance be better? Because this is more direct."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper provides rigorous evaluation, testing various model types and prompting techniques. The introduction of the RCRPS metric enhances assessment accuracy by factoring in context relevance and constraint adherence. The combination of text and time series has always been something that researchers in the field want to try, and this benchmark provides a good research foundation. The writing structure of the paper is very clear"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper explores the integration of contextual textual data with numerical time series to improve time series forecasting. It introduces the CiK benchmark, consisting of 71 diverse forecasting tasks across multiple domains. Unlike existing benchmarks, CiK requires models to process both numerical data and associated textual context, reflecting real-world complexities such as seasonal trends or future constraints (e.g., maintenance periods). The authors also propose a novel metric, the Region of Interest Continuous Ranked Probability Score (RCRPS), which weights context-sensitive time windows and penalizes constraint violations."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "From the experimental results, we can see that text provides a good auxiliary role, but this method should be limited to models with LLM as the backbone."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "The paper does not provide any details about the manual curation process involved in creating the benchmark. Given the scale of data curation implied, it seems unlikely that this task could have been completed by a small group of authors without support from crowdsourcing, LLMs, or other manual annotators. The lack of discussion regarding these aspects raises questions about the claim of \"careful manual curation.\" If crowdsourcing or external labor was utilized, the absence of a description of the tasks, associated costs, and acknowledgment of contributors may hint towards uncredited or underpaid labor."
},
"flag_for_ethics_review": {
"value": [
"Yes, Responsible research practice (e.g., human subjects, data release)"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Questions:\n1) Does the covariate information always imply the availability of the future values of the covariates, or are there examples with covariate information provided only for the history time series?\n2) While I believe it would be handled in a discussion of the manual curation process, I wanted to know if the entire manual curation for so many datasets was done by the authors who would be credited for the paper or if any form of crowdsourcing was utilized for the manual tasks. Were the annotators paid fairly for their efforts if any crowdsourcing was utilized? Was any LLM used during the manual curation process?\n\nSuggestions for the authors:\n1) Include a discussion on the manual curation process with information on the data sources and the selection of relevant context from them.\n2) Include benchmark descriptions mentioning the sequence lengths, prediction horizon, and number of sequences present in each benchmark to build confidence in the robustness of the work.\n3) Provide some ideas for separating different contextual information in the text.\n4) Highlight the efforts taken to ensure that any contextual information paired with the time series is actually correct/relevant. Do a similar task to highlight the relevance of the said \"region of interest.\""
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "This work is original and highly relevant to the current trend of LLM-based forecasting. The benchmark is well-designed, offering broad coverage across various domains and tasks, with results for multiple forecasters included to showcase its capabilities. The analysis and results are clearly presented, with examples and figures that effectively illustrate key benchmark characteristics and greatly enhance comprehension. Additionally, the paper introduces an intriguing new metric for evaluating forecast quality within relevant regions, which adds depth to the evaluation framework."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper addresses a pertinent issue: the scarcity of benchmarks for context-enhanced forecasting. Although many recent studies focus on predicting future values using textual cues, there is limited data available for training and evaluating such models. This paper introduces a manually curated benchmark specifically for text-aided time-series forecasting, featuring numerical time series data paired with contextual text. The benchmark is extensive in its selection of domains and tasks, providing a comprehensive resource. It is also thoroughly evaluated across a wide range of forecasting models. Furthermore, the paper introduces a novel metric for forecast evaluation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While this work is both novel and relevant, it lacks analytical rigor in its benchmark evaluation. The reader is supposed to assume that all provided textual information contributes meaningfully to the forecasts, with minimal evidence beyond the overall performance improvements seen for the entire dataset. Evaluating the relevance of the textual context—perhaps through methods such as LLM-based or human assessments—would strengthen the claim that these textual data are correct and relevant descriptions for the time series. Additionally, some covariate information appears to include future events, which a causal forecaster would not typically access (e.g., “Speed From Load” in Appendix B.2). This raises concerns about causal consistency, as there is no mechanism for systematically separating different types of contextual data other than through manual or LLM editing. Such limitations could present challenges for users who want to avoid incorporating future or irrelevant covariate information in their experiments.\n\nThe paper also lacks clarity regarding the historical context length and forecasting horizon—key details that should be specified. Furthermore, the reliability of the benchmark results hinges on the sample size, yet no information about the number of samples for the datasets is provided.\n\nPerhaps the most notable contribution of the paper is its manual curation of the dataset. However, this process remains underexplained. Details such as the curation methodology, sources of textual data, and the criteria used for selecting relevant data are absent, which limits the transparency of this work. A more comprehensive discussion of these aspects would significantly enhance the credibility and utility of the dataset for future research."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Can you open-source all the related materials? I think it would be better to let everyone judge whether the benchmark has enough value and is easy to use.\n2. How do you choose the best-quality text, and how would you assess this quality?\n3. Is there a better way to incorporate the numbers into the prompt to evaluate the effect of the benchmark?\n4. Text brings more computation, but the effect of the texts needs further discussion. How do you balance the two and choose the most effective one?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "I would say this is a 'huge' work, congratulations!\nThe following are some points I agree with:\n1. The benchmark is relatively complete and has the potential for impact; it includes textual context (time-invariants, history, covariates, future and causal information). This work may lead to complex or graph modelling with this information.\n2. The proposed evaluation strategy is simple but useful.\n3. The paper contains some use cases and a discussion of existing models."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces \"Context is Key\" (CiK), a benchmark designed to evaluate the ability of forecasting models to integrate numerical data with essential textual context (time-invariants, history, covariates, future and causals...). Recognizing that accurate forecasting often requires more than just numerical inputs, the authors aim to bridge this gap by creating a collection of tasks that necessitate the use of both modalities. Some key contributions:\n1. a relatively complete benchmark named CiK\n2. an analysis of different models\n3. the proposed Direct Prompt, a simple strategy for prompting LLMs to perform time-series prediction"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Lack of discussion of noise in the texts. The texts are complete, but the quality filter is not well designed. In general, we need more well-designed texts that are really useful.\n2. What is the importance of the texts? Is this information hidden in the time series itself? For me, a time series is the sampling result of a complex system, and even though the authors have done a lot to provide a more complete description, the system is hard to define. As a result, the time series itself may contain more information than the texts.\n3. Besides model evaluation, the benchmark is hard to use in the real world, where people may prefer simple models and may lack texts.\n4. The cooperation between texts and numbers needs to be better designed. Generally, LLMs including GPT are not good at dealing with numbers, and they are not sensitive to the symbols in numbers."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "A forecasting benchmark with problems that require the combined use of numerical historical data and textual context."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024context,\ntitle={Context is Key: A Benchmark for Forecasting with Essential Textual Information},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=4F1a8nNFGK},\nnote={under review}\n}"
},
"abstract": {
"value": "Forecasting is a critical task in decision making across various domains. While numerical data provides a foundation, it often lacks crucial context necessary for accurate predictions. Human forecasters frequently rely on additional information, such as background knowledge or constraints, which can be efficiently communicated through natural language. However, the ability of existing forecasting models to effectively integrate this textual information remains an open question. To address this, we introduce \"Context is Key\" (CiK), a time series forecasting benchmark that pairs numerical data with diverse types of carefully crafted textual context, requiring models to integrate both modalities. We evaluate a range of approaches, including statistical models, time series foundation models and LLM-based forecasters, and propose a simple yet effective LLM prompting method that outperforms all other tested methods on our benchmark. Our experiments highlight the importance of incorporating contextual information, demonstrate surprising performance when using LLM-based forecasting models, and also reveal some of their critical shortcomings. By presenting this benchmark, we aim to advance multimodal forecasting, promoting models that are both accurate and accessible to decision-makers with varied technical expertise. The benchmark can be visualized at https://anon-forecast.github.io/benchmark_report_dev/."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Time series",
"forecasting",
"multimodality",
"foundation models",
"contextual forecasting",
"deep learning",
"machine learning",
"context-awareness"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/b186f4d3aea470b532b8c1c783ad8a0a7e6fa27c.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning on time series and dynamical systems"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Context is Key: A Benchmark for Forecasting with Essential Textual Information"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
4FIjRodbW6 | Toward Robust Defenses Against LLM Weight Tampering Attacks | main | Active | ai safety;large language models;tamper-resistance;unlearning;meta-learning | alignment, fairness, safety, privacy, and societal considerations | 3;3;5;6;6;8 | 3;4;3;4;3;3 | 2;2;3;3;2;3 | 2;2;2;3;4;3 | 2;3;2;3;3;3 | 5.166667 | 3.333333 | 2.5 | 2.666667 | 2.666667 | -0.266076 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weaknesses section for questions and suggestions. I would be happy to change my opinion if my main concerns regarding fair evaluation of the defense method could be addressed."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. **Significance** of the problem. The paper addresses an important and challenging problem of defending open-weight large language models against finetuning attacks. In authors words: \"_This problem has been considered very challenging and by some intractable, as no method has yet provided substantial robustness to these attacks. However, making progress on this problem would provide a valuable tool to regulators and model developers by ameliorating the dual-use dilemma of open-weight models_\".\n\n2. **Originality**. Although most of the components of the proposed method are inspired from previous works (e.g. adversarial training, representation engineering), the overall approach constitutes substantial novel contribution to the field, to the best of my knowledge. The results on tamper-resistance for post-attack harmful accuracies demonstrate substantial improvement over previous methods (for most of the considered attacks)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a method (TAR) for improving tamper-resistance, i.e. model-level defense against adversarial finetuning attacks, of open-weight large language models. The method consists of several components: (1) initial safeguarding via _random mapping_ of harmful representations, (2) outer loop minimizing tamper-resistance and retain losses, (3) inner loop for computing tamper-resistance loss, which applies multiple finetuning attacks. Different design choices for tamper-resistance loss and its empirical significance are discussed: for weaponization knowledge restriction setting a negative _entropy_ loss is proposed, and for harmful request refusal a direct preference optimization (DPO) loss is used. The retain loss consists of language modeling loss and $l_2$-norm loss for representations of optimized and a base model. The results suggest the proposed method effectively defends the model against the majority of considered finetuning attacks, maintaining low accuracies on harmful questions post-(finetuning)-attack, although at the considerable cost of drop in accuracy on benign questions (pre-attack). Additionally, the authors acknowledge that the set of finetuning attacks during tamper-resistance training directly impacts the tamper-resistance against test-time attacks (e.g. \"Retain $\\rightarrow$ Forget\" attack breaks the defense if it is not included in the training phase), suggesting the defense might struggle with unseen attacks (e.g. PEFT-attacks could break the defense in many of the settings)."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. **Evaluation** against out-of-distribution attacks. \n- My main concern is that the defense might be effective mostly against observed attacks, and it could break against other unseen attacks. For example, Table 4 in the Appendix shows that the \"Retain $\\rightarrow$ Forget\" attack breaks the defense if it is not included in the training phase. Figures 4 and 8 in the Appendix show that PEFT attacks are more effective than Full Parameter attacks (in the case of Biosecurity, PEFT attacks break the proposed defense method), given that TAR used Full Parameter attacks during training.\n- Therefore, more emphasis and red-teaming effort should be put into unseen attacks during evaluation; e.g., the Post-Attack scores in Table 1 could be divided into \"in-distribution\" and \"out-of-distribution\" attacks, where the categorization of attacks should be agnostic to LR, LR scheduler, optimizer, number of steps, and batch size. In other words, out-of-distribution attacks could be defined as those that use fundamentally different approaches or data sources than what was used during training, rather than just different hyperparameters. Testing against attacks that use different optimization algorithms, loss functions, or data distributions not seen during training could provide a more comprehensive assessment of the method's robustness.\n- Since PEFT finetuning is more compute-efficient than Full Parameter tuning, variants of PEFT attacks with more optimization steps should be considered under the compute-constrained attacker setup. PEFT attacks should also be considered for harmful request refusal. \n- Other out-of-distribution attacks could also be proposed and evaluated, e.g. initializing an attack by perturbing the weights in a random direction before following gradients to avoid a local, potentially gradient-obfuscated region; or running a ground-truth-informed attack by following a fixed worst-case direction towards the weights of a harmful model to observe how far the optimization should run to get harmful results. \n- Input-level red-teaming approaches (e.g. from the HarmBench benchmark) could also be evaluated as alternative attacks, which do not include gradients or weight perturbations.\n\n2. A more **detailed analysis** of the method is missing. The problem of **obfuscated gradients** should be addressed.\n- The results for Chemical Security in Table 1 suggest that post-attack harmful accuracies for TAR are lower than pre-attack ones, which is unexpected and worrying. Could you provide a more detailed analysis of this phenomenon? Could you investigate whether this is due to a quirk in your evaluation setup, or if it reveals something fundamental about how your method works? \n- Also, the plots in Figure 6 in the Appendix show that the loss values first start to increase under the gradient-based attacks, which is surprising. Could the loss decrease just by switching the gradient sign at the beginning of the optimization? This might point towards the problem of obfuscated gradients [a]. Other attacks, e.g. gradient-free ones, or an exploration of the loss landscape could provide a better understanding of the phenomenon. \n- Section A in the Appendix states that post-attack accuracy on benign questions for the TAR method is low. This should be reflected in the main paper, and the reasons for this phenomenon could be studied and discussed. Section C1 of the Appendix addresses the problem of benign finetuning; however, it does not provide a comparison with benign finetuning of the base model (or other non-TAR-trained harmless models). What percentage of harmful questions could appear in finetuning to prevent the benign learning? Could benign prompts from the Over-Refusal benchmark [b] cause the issues with benign finetuning?\n- Over-Refusal benchmark [b] results should be included for the restricted models for full evaluation.\n\n3. The **capability-robustness tradeoff** could be studied and discussed in more detail, since this is the main limiting factor of applying TAR compared to baseline methods. \n- From Table 1, TAR is considerably *worse than baselines in terms of benign capabilities* in adjacent domain knowledge by about 10% in all domains. What about other capabilities such as reasoning, multi-lingual understanding, creative writing, coding, etc.?\n- Could the whole capabilities-robustness tradeoff curve be explored and demonstrated for TAR and for baseline methods by varying hyperparameters (e.g. $\\lambda_{TR}$)? Could a single metric for the method's tradeoff performance be proposed and compared between baselines, similar to the Accuracy-Robustness Tradeoff Score (ART-score) in [c]?\n\n4. **Clarity**. \n- Many important parts of the paper are in the Appendix, e.g. Random Mapping, attack categories, and many crucial evaluations (see above). This makes the main paper less clear and prevents it from being self-sufficient.\n- Was a single model trained to be tamper-resistant against all 3 weaponization domains, or were 3 different TAR models considered (e.g. in Table 1)? Was a single model trained for both weaponization restriction and harmful request refusal, or 2 different ones? Could a single model be defended against all considered harms? How would it affect benign capabilities? Is the approach the same for baseline methods? It was not clear from the text for me. These details could be included in the experimental setup section, and you could include a diagram or table summarizing the model configurations used for different experiments.\n- What are the computational costs of TAR compared to benign PEFT, Full Parameter finetuning, and other baselines? \n- Minor comment: could the scores in Table 1 be scaled such that a random model gets 0 and a perfect model gets 100? It would help visualize the effectiveness of TAR more clearly.\n\n[a] Athalye, A., Carlini, N., & Wagner, D. (2018, July). Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In International conference on machine learning (pp. 274-283). PMLR.\n\n[b] Cui, J., Chiang, W. L., Stoica, I., & Hsieh, C. J. (2024). OR-Bench: An Over-Refusal Benchmark for Large Language Models. arXiv preprint arXiv:2405.20947.\n\n[c] Nurlanov, Z., Schmidt, F.R., Bernard, F. (2024). Adaptive Certified Training: Towards Better Accuracy-Robustness Tradeoffs. In: Bifet, A., et al. Machine Learning and Knowledge Discovery in Databases. Research Track and Demo Track. ECML PKDD 2024. Lecture Notes in Computer Science(), vol 14948. Springer, Cham. https://doi.org/10.1007/978-3-031-70371-3_8"
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "No ethics review is needed."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "+ In Figure 2, the difference between TAR and the baseline methods is significant. However, it seems that the capabilities of TAR are lower than the baseline methods. Is there a trade-off between capabilities and tamper resistance?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "**About contribution**\n\n+ The experimental results shown in Table 1 are significant enough to validate the main claims of this paper. \n+ The proposed method is intuitive. By providing detailed discussions of the related works, the authors make it easy to understand why they designed the algorithms as presented, even for readers not familiar with the defense of LLMs.\n\n**About novelty**\n\nAccording to Section 2, this paper proposes the first defense method for autoregressive LLMs against tampering attacks. To the best of my knowledge, current jailbreaking attacks are mostly input-based. However, as claimed in Section 1, tampering attacks also pose threats to LLMs. This paper will bring new insights into research on the robustness of LLMs.\n\n**About presentation**\n\n+ The preliminary part (Section 3) is brief and clear, making the technical part of this paper easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper focuses on the robustness of open-weight LLMs and proposes a novel defense method called TAR."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**About presentation**\n\n+ The authors do not discuss the cost of the experiments, including time cost and GPU memory cost. Section B.4 mentions that the experiments use 8 A100 GPUs with 80GB memory each. What is the minimum hardware requirement for the experiments? \n+ I suggest including a statement of contributions to make this paper easier to follow."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Insufficient sustainability. This paper proposes integrating adversarial learning and meta-learning to enhance the effectiveness of defense mechanisms, making it difficult for attackers to compromise them in a short period. However, this effectiveness actually depends on the diversity of attack types included in the training data used to optimize Eq. (1). In other words, the resilience of the proposed mechanism may be superficial and does not guarantee the security of open-weight LLMs. Furthermore, the authors do not provide corresponding theoretical analysis or proof.\n2. Incremental technical contributions. Although the paper is expressed clearly, its innovation is not evident in terms of either technical aspects or application scenarios. Specifically, the proposed solutions are based on existing, widely used methods, and the authors have not clearly articulated their unique contributions. Therefore, it is recommended that the authors provide further clarification on this matter.\n3. The performance of the proposed mechanism is closely tied to the adversarial training methods and data, which means its resilience remains a significant issue.\n4. The presentation of the performance comparison between TAR and existing mechanisms in Figure 5 is unclear and potentially confusing. The authors should provide further analysis of this result, explaining why the performance of TAR shows a significant change as the step size increases."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The security issues related to open LLMs are both important and intriguing. The authors present a series of solutions to address these security threats, and experiments validate the performance of the proposed mechanisms."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper focuses on the issue of the lack of robustness in open LLMs when facing model weight tampering attacks. The authors propose a method called TAR, designed to establish tamper-resistant protection mechanisms for LLMs, ensuring that attackers cannot compromise these protections after minimal optimization. Extensive experiments validate the performance of this approach, showing that it outperforms existing methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Insufficient sustainability. This paper proposes integrating adversarial learning and meta-learning to enhance the effectiveness of defense mechanisms, making it difficult for attackers to compromise them in a short period. However, this effectiveness actually depends on the diversity of attack types included in the training data used to optimize Eq. (1). In other words, the resilience of the proposed mechanism may be superficial and does not guarantee the security of open-weight LLMs. Furthermore, the authors do not provide corresponding theoretical analysis or proof.\n2. Incremental technical contributions. Although the paper is expressed clearly, its innovation is not evident in terms of either technical aspects or application scenarios. Specifically, the proposed solutions are based on existing, widely used methods, and the authors have not clearly articulated their unique contributions. Therefore, it is recommended that the authors provide further clarification on this matter.\n3. The performance of the proposed mechanism is closely tied to the adversarial training methods and data, which means its resilience remains a significant issue.\n4. The presentation of the performance comparison between TAR and existing mechanisms in Figure 5 is unclear and potentially confusing. The authors should provide further analysis of this result, explaining why the performance of TAR shows a significant change as the step size increases."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N.A."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "None"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The topic studied in this paper is of great significance. Malicious LLMs can cause serious harm, such as spreading false news about public figures, causing discrimination and unfair results due to bias, and generating violent terrorist information. Therefore, it is necessary to apply robust safeguards to open-source LLMs.\n\n2. The proposed method successfully resists thousands of steps of malicious fine-tuning."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a novel method called TAR, designed to enhance the robustness of large language models (LLMs) against tampering attacks, addressing significant vulnerabilities in existing safeguards."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The organization of Section 4 requires further refinement for improved readability. It would be beneficial to briefly outline the design motivation before delving into specific details, particularly regarding the content of Fig. 3. Additionally, the mathematical formulation of the loss function used is currently absent.\n\n2. The caption of Figure 1 needs to explain the difference between the two branches more clearly. For example, what is the difference between the first nodes of the two branches?\n\n3. In Section 1, the authors assert that the proposed method can endure fine-tuning of up to 5000 steps. However, this claim does not intuitively convey the contribution of the paper. Firstly, the details surrounding fine-tuning, such as batch size and learning rate, are unclear; these parameters significantly influence the number of fine-tuning steps. Secondly, a comparative analysis with the typical number of steps required to fine-tune models on downstream tasks is lacking.\n\n4. The threat model lacks clarity. The authors assume that the attacker is compute-bounded but do not provide a clear definition of what this entails. Furthermore, including concrete examples of the metrics for capabilities_metric and safety_metric would enhance understanding.\n\n5. In Section 4.2, the authors emphasize that the proposed method is different from standard meta-learning. However, the differences highlighted seem minor and do not present significant technical challenges.\n\n6. The term 'empirically' is employed multiple times in the methods section. While drawing conclusions from empirical observations is valuable for informing solution design, relying solely on empirical data, particularly in the selection of a loss function, may impose limitations on the solution's robustness. A theoretical analysis comparing the efficacy of the entropy loss function versus cross-entropy loss is necessary.\n\n7. The performance metrics used in the experiments require clear explanation. Based on the results presented, the proposed solution appears to sacrifice task performance in favor of enhanced robustness, suggesting potential flaws in the method."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I am curious about what kind of A_train in the objective in Eq(1) is required to have a good defense. For example, how diverse attacks need to be included; how many adversarial examples for each attack need to be included, etc."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "This paper is written well and logically. I enjoy reading this work and every detail is properly described. The large-scale experiments are comprehensive and convincing."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies an important problem: fine-tuning attacks on LLMs. They propose a novel defense method, TAR, to improve the robustness of LLMs against possible malicious fine-tuning attacks. This method is based on adversarial training and meta-learning, and a novel training objective combining both an adversarial objective and a retaining objective is proposed to maintain utility. Extensive experiments are conducted to illustrate the effectiveness of proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The time complexity analysis is important but not included. Both adversarial training and meta-learning are time-consuming. The proposed method can be expensive especially when the model size is increasing. This brings a concern about whether this method is practical for protecting large models. I suggest the authors provide computation analysis either empirical or theoretical.\n\n2. There are many hyperparameters in either objective in Eq (1) or optimizing it, such as the number of outer loops and coefficients before tamper-resistance loss and retain loss (lambda_TR and lambda_retain). How they influence the defending performance is not discussed, and I suggest including it."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Are there any experiments to validate TAR on proxy metrics has a certain degree of generalization on other safety scenes or benchmarks?\n2. How about the training cost of TAR. Is it larger than original training?\n3. Except for the adversarial training, are there any novel findings or modifications of TAR, which suggest the novelty."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The defense of LLM's weight tampering attacks is an important topic. This work has great significance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes Tampering Attack Resistance (TAR) method which builds robust safe LLMs under weight tampering attacks. The method achieves superior performance on weaponization knowledge restriction and harmful refusal training."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. there are to many proxy objectives, like \"safety_metric\", \"capabilities_metric\". These pre-defined metrics will limit the significance and universality of the proposed method. Unless the author can prove that TAR is still working under other \"safety_metric\" or \"capabilities_metric\".\n2. From Eq.1, TAR is a simple adversarial training paradigm, with some proxy indicators. While adversarial training is an exist well known technique."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduce the first safeguards for LLMs that defend against a significant number of fine-tuning attacks."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024toward,\ntitle={Toward Robust Defenses Against {LLM} Weight Tampering Attacks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=4FIjRodbW6},\nnote={under review}\n}"
},
"abstract": {
"value": "Rapid advances in the capabilities of large language models (LLMs) have raised widespread concerns regarding their potential for malicious use. Open-weight LLMs present unique challenges, as existing safeguards lack robustness to tampering attacks that modify model weights. For example, recent works have demonstrated that refusal and unlearning safeguards can be trivially removed with a few steps of fine-tuning. These vulnerabilities necessitate new approaches for enabling the safe release of open-weight LLMs. We develop a method, called TAR, for building tamper-resistant safeguards into open-weight LLMs such that adversaries cannot remove the safeguards even after thousands of steps of fine-tuning. In extensive evaluations and red teaming analyses, we find that our method greatly improves tamper-resistance while preserving benign capabilities. Our results demonstrate that progress on tamper-resistance is possible, opening up a promising new avenue to improve the safety and security of open-weight LLMs."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"ai safety",
"large language models",
"tamper-resistance",
"unlearning",
"meta-learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/026b149342bdc8a866261230c48fb31d8c439eac.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/5588295313b857bc7813bb2e3ff9779b43afcd61.zip"
},
"title": {
"value": "Toward Robust Defenses Against LLM Weight Tampering Attacks"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
4FRUNLuY54 | Dragonfly: Multi-Resolution Zoom-In Encoding Enhances Vision-Language Models | main | Active | Multimodel Language Model;Visual Instruction Tuning;Biomedical multimodal model;foundation model | foundation or frontier models, including LLMs | 3;5;5 | 4;4;4 | 2;2;2 | 2;2;2 | 3;2;3 | 4.333333 | 4 | 2 | 2 | 2.666667 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to the weaknesses part. Additionally, is there a comparison of the inference time or flops of different methods?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Dragonfly uses multi-crop techniques to process images at high resolutions, thus enhancing the model’s capability to capture fine details. Meanwhile, it uses a simple mean-pooling strategy to reduce visual tokens effectively, which retains the efficiency of the model.\n2. The proposed method achieves competitive performance across multiple benchmarks and shows strong generalizability across both general and specialized biomedical tasks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces DragonFly to enhance vision-language models. The main idea is to combine the multi-cropping with mean pooling, so that the VLM can use high-resolution image encoders and work on images' native resolution, while ensuring the efficiency of the model. The proposed enhanced VLM performs well in tasks needing finer details, such as biomedical tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper uses a very simple strategy that combines multi-crop and mean-pooling to enhance VLM. However, the motivation for choosing mean-pooling instead of other compression techniques is not clearly stated. It's like an experimental report that simply states this method is effective. So why does mean-pooling outperform other strategies? Why do you choose such a pooling window? Will this direct pooling harm the extraction of the fine details? A more comprehensive analysis is expected."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please see weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The ablation studies proves their proposed strategy.\n2. Biomedical domain is considered by this paper.\n3. The motivation is nature and easy to follow.\n4. A SFT dataset with different domains and huge number of images is built."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces multi-crop techniques beyond the native resolution for high-resolution images. To handle the huge number of numbers, the authors employ the average pooling startegy on each crop. Except for general domain of benchmarks on fine-grained image understanding, this paper also introduce the contributioni on biomedical tasks. They also curate a SFT dataset including 2.4M general images and 1.4M biomedical images for training."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The novelty is limited. The proposed strategy is only an extension of any-resolution technical. Compared to any-resolution which uses two levels of rosulotions, they only resize the image, crop more patches and use three levels of resolutions of image.\n2. The visualizations only shows the response of the proposed Dragonfly, other MLLM's responses are encouraged to be listed for better comparision.\n3. To balance the computational costs and performance, they use mean pooling within each crop. The paper lacks the dicussion about how to choose a proper compressing ratio for trade-off."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weakness."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Dragonfly introduces an innovative multi-resolution zoom-in encoding strategy that surpasses native image resolutions, enabling the capture of intricate details from non-dominant objects, charts, and embedded text.\n2. It implements a simple yet effective mean-pooling aggregation method and achieves good performance across a diverse set of benchmarks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The manuscript presents Dragonfly, a novel Vision-Language Model (VLM) that employs a multi-resolution zoom-in encoding strategy to enhance fine-grained visual understanding. Unlike conventional Vision Transformers (ViTs) that downsample images to fixed, low resolutions—thereby losing critical details—Dragonfly processes images at higher resolutions and employs a multi-crop technique that exceeds the native resolution. This approach allows the model to capture intricate details from non-dominant objects, charts, and embedded text, which are often challenging for existing ViTs. To address the computational complexity arising from the increased token count, Dragonfly utilizes a mean-pooling aggregation strategy. The model demonstrates competitive performance across ten general-domain benchmarks and sets new benchmarks in several biomedical tasks, outperforming larger models trained on significantly more extensive datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Novelty. The proposed method builds upon existing multi-resolution and multi-crop techniques without offering substantial novel contributions. The idea of processing images at higher resolutions and using multi-crop strategies has been explored in prior works, and Dragonfly does not sufficiently differentiate itself beyond these established methods.\nModel Comparison: Dragonfly is developed using the more advanced Llama3 model, whereas comparable methods utilize less capable language models, such as Llama2 and Qwen2. This discrepancy raises concerns about the fairness of comparisons. How does Dragonfly's performance measure up when evaluated against these models?\nData Influence: It is unclear whether the observed performance improvements with Dragonfly stem from the curated data or from the model's design. How does Dragonfly perform when tested with a commonly used dataset?"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Dragonfly surpasses existing vision transformers by zooming in beyond native image resolutions, excelling in fine-grained detail extraction and setting new benchmarks in general and biomedical tasks."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024dragonfly,\ntitle={Dragonfly: Multi-Resolution Zoom-In Encoding Enhances Vision-Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=4FRUNLuY54},\nnote={under review}\n}"
},
"abstract": {
"value": "Recent advancements in vision-language models (VLMs) have highlighted the benefits of processing images at higher resolutions and leveraging multi-crop features to retain native resolution details. However, current vision transformers (ViTs) often struggle to capture fine-grained details from non-dominant objects, charts, and embedded text, limiting their effectiveness in certain tasks. In this paper, we push beyond the conventional high-resolution and multi-crop techniques by not only preserving but also zooming in past the native resolution of images. This enhancement allows our model to better extract fine-grained details, overcoming the limitations of current ViTs. To manage the increased token count and computational complexity, we show that a simple mean-pooling aggregation over tokens is effective. Our model, Dragonfly, achieves competitive performance on general tasks such as ScienceQA and AI2D, and excels in tasks requiring fine-grained image understanding, including TextVQA and ChartQA. On average, across ten general-domain benchmarks, Dragonfly ranks at the top, outperforming models that are significantly larger or trained on much larger datasets. Notably, Dragonfly sets new benchmarks on several biomedical tasks, achieving 91.6\\% accuracy on the SLAKE (compared to 84.8\\% for Med-Gemini) and a 67.1\\% token F1 score on Path-VQA (compared to 62.7\\% for Med-PaLM M). On biomedical image captioning tasks, {\\model} attains state-of-the-art results majority of the performance metrics."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Multimodel Language Model",
"Visual Instruction Tuning",
"Biomedical multimodal model",
"foundation model"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/a4daba511b5363d3f69c8f7de06c7b03762c6bf3.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Dragonfly: Multi-Resolution Zoom-In Encoding Enhances Vision-Language Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
4FVGowGzQb | Preference Optimization as Probabilistic Inference | main | Active | Preference Optimization;Reinforcement Learning;Probabilistic Inference;Positive feedback;Negative Feedback | reinforcement learning | 3;3;5;8 | 3;3;3;3 | 2;2;3;3 | 2;2;2;3 | 3;3;1;3 | 4.75 | 3 | 2.5 | 2.25 | 2.5 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Can the authors clarify if PMPO is indeed implemented as the expression in #5 above? If not, could the authors clarify the differences? If so, can the authors clarify its significance?\n2. Can the authors clarify in what setting one would choose to use PMPO over other common baselines and why?\n3. Why doe the authors focus on DPO as the baseline? Why not consider other common methods in RL benchmarks? \n\nOther smaller comments:\n1. In the first paragraph of experiments, shouldn't $\\beta$ be $alpha$?\n2. What is the value of $\\beta$ used in Figure 2?\n3. Why is the MPO baseline in Figure 2 denoted with a dashed horizontal line?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The proposed PMPO and its derivation is, to my knowledge, novel. The objective is also easy to understand and implement, and the derivation has a clear probabilistic grounding in expectation maximization.\n2. The paper is clearly written and tackles a relevant topic to the ICLR community."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a preference learning objective (PMPO) that can utilize not just preference pairs but any combination of positive only or negative only samples. The objective is derived by defining an EM formulation for the expected success maximization objective of Eq 1 and defining the M step for both preferred and dispreferred samples. Experiments show that PMPO can operate with preference pairs as well as only preferred or only dispreferred samples."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While the method derivation is clear and well-motivated, the primary weakness in the work lies in the experiments.\n1. The proposed method does not outperform DPO, the main baseline being compared to.\n2. The experiments on bandit RL tasks focus on DPO as a baseline, without considering other methods used in these benchmarks.\n3. The DPO baseline does not seem to use all the data given to PMPO; for instance, the end of Section 5.1 states that DPO uses \"the best and worst action samples among the 4 sample archives\", rather than all samples.\n4. While PMPO can be adapted to a wider array of settings than DPO, for any given setting, it is not clear when and why one would choose to use PMPO over another method for that setting, e.g. PMPO on preferred only vs. SFT on preferred.\n5. Assuming a sequence-level forward KL term, the proposed objective seems to simply amount to a weighted average of positive log prob terms for large enough $\\beta$: namely, if $\\mathcal{J} = \\frac{1}{n} [\\alpha \\sum_{y \\in D_a} \\log \\pi_\\theta(y|x) - (1 - \\alpha) \\sum_{y \\in D_r} \\log \\pi_\\theta(y|x) - \\beta \\sum_{y \\in D_a \\cup D_r} \\log \\pi_{ref}(y|x) + \\beta \\sum_{y \\in D_a \\cup D_r} \\log \\pi_\\theta(y|x)$,\n\nthen for $\\beta > 1 - \\alpha$, the objective is a sum of positive log probs only. Indeed, Figure 3 suggests that a large $\\beta$ value is needed, which seems to suggest that PMPO works when it is close to supervised finetuning (with the only difference being a different non-negative weight for the preferred and dispreferred samples)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- How to optimize $\\alpha, \\beta$ in practice?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Tackling the relevant and complex problem of incomplete data in preference optimization, for example, only having access to a negative examples\n- Thorough and extensive related work making the contribution clear\n- Objective is intuitive and makes sense probabilistically, especially through the use of the prior\n- More flexible than methods like DPO and might apply to novel scenarios \n- Extensive empirical evaluation on a variety of tasks from control, rl, to llm preference optimization"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a preference optimization method that can utilize not only paired but also signal preferred or dis-preferred outcomes. The authors extend and improve on EM to tackle this problem and show empirical evidence favoring the proposed approach."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Does introduce new hyperparameters that are potentially non-trivial to tune ($\\alpha, \\beta$)\n- Title could be more specific. For example, something mentioning the capability to learn from dis-preferred examples. This could also help to attract readers interested in this particular problem. Currently, it seems only appealing to researchers interested in probabilistic inference.\n- Does not improve over DPO, but might also be due to missing datasets well-suited for the setup"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "See above"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The proposed method allows for training with unpaired examples and accommodates scenarios where only one type of feedback—positive or negative—is available, making it widely applicable across different contexts."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a preference optimization method that utilizes unpaired examples and can learn from either positive or negative feedback, addressing limitations in existing approaches by that need paired data."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* Personally, I find the presentation of this paper lacking. The main formulation in equation (10) is quite intuitive and provides a straightforward extension of the previous pairwise method to a more general setting. However, the derivations in Sections 3.1 and 3.2 are tedious and difficult to follow. I question the necessity of such extensive derivation from the expectation-maximization (EM) framework. It seems possible that the authors formulated the equation first and then sought a probabilistic framework to justify it. If this is the case, I strongly encourage the authors to present equation (10) prominently and follow it with a brief explanation of its connection to EM in a small subsection or appendix.\n\n* I would particularly like to see a more rigorous comparison, using the Gemma-2B model, both methods could be trained on the pairwise UltraFeedBack / LMSYS-chat-1M dataset and then evaluated on the Alpaca Eval benchmark. If PMPO cannot match DPO, it will be important to delineate the limitations of PMPO and understand when it is appropriate to use this approach."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. How does using DPO with all (positive, negative) pairs from the 4 samples compare to PMPO?\n2. Is there an issue with pairing positive and negative responses from unpaired datasets?\n3. Why does using both accepted and rejected responses for language model alignment perform worse?\n4. Can you explain Figure 5, in particular, the different cutoffs?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper addresses the problem of being able to use unpaired data and allowing for general preference distributions. In particular, they present an objective that is derived from maximizing an expectation that is motivated by existing methods. They perform experiments on multiple datasets and demonstrate that the theory is applicable through varying $\\beta$ for only positive and only negative samples."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a way to learn from unpaired preference data which is a constraint for other algorithms such as DPO. The method is motivated by expectation-maximization methods and results in an objective that weights positive and negative samples and applies cross entropy to positive samples and negative cross entropy to negative samples with KL regularization. They demonstrate their method on different benchmarks including bandit settings, DeepMind control suite, and language model alignment."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The main concern is whether the issue of unpaired preference data is a major problem and whether the experiments present a fair comparison between DPO and the proposed methods. In particular, if there are unpaired preference labels for a given state, it seems like a simple fix would be to pair each positive outcome with each negative outcome. Additionally, in the experiments, while the other baselines had access to all 4 samples, only DPO had access to 2. While the preferences should be paired, there is no restriction on having one pair of preferences per state. It would be more clear that the paper addresses an important issue if pairing samples from an unpaired dataset does have poor performance.\n\nFurthermore, it would be more clear if it was mentioned that the reference model was updated as at the end of section 5.1, it is mentioned that there is a slowly changing reference. This varies from the original DPO which has a fixed reference. \n\nThere also seems to be a significant drop in performance with PMPO using both accepted and rejected responses in Figure 4. Furthermore, using all responses does not result in a higher peak than only using accepted responses which is concerning as it seems that using more data actually leads to worse performance. Additionally, there are seemingly incomplete or disrupted lines in Figure 5."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "A new algorithm that learns from different type and number of feedback (positive, negative, or both) to optimize policies."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024preference,\ntitle={Preference Optimization as Probabilistic Inference},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=4FVGowGzQb},\nnote={under review}\n}"
},
"abstract": {
"value": "Existing preference optimization methods are mainly designed for directly learning from human feedback with the assumption that paired examples (preferred vs. dis-preferred) are available. In contrast, we propose a method that can leverage unpaired preferred or dis-preferred examples, and works even when only one type of feedback (positive or negative) is available. This flexibility allows us to apply it in scenarios with varying forms of feedback and models, including training generative language models based on human feedback as well as training policies for sequential decision-making problems, where learned (value) functions are available. Our approach builds upon the probabilistic framework introduced in (Dayan & Hinton, 1997), which proposes to use expectation-maximization (EM) to directly optimize the probability of preferred outcomes (as opposed to classic expected reward maximization). To obtain a practical algorithm, we identify and address a key limitation in current EM-based methods: when applied to preference optimization, they solely maximize the likelihood of preferred examples, while neglecting dis-preferred samples. We show how one can extend EM algorithms to explicitly incorporate dis-preferred outcomes, leading to a novel, theoretically grounded, preference optimization algorithm that offers an intuitive and versatile way to learn from both positive and negative feedback."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Preference Optimization",
"Reinforcement Learning",
"Probabilistic Inference",
"Positive feedback",
"Negative Feedback"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/6eeef4e493821db54ea734249a27dac75f5fe32d.pdf"
},
"presentation": null,
"primary_area": {
"value": "reinforcement learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/fef70a68f479be33c3ce18b6b256577743314ee4.pdf"
},
"title": {
"value": "Preference Optimization as Probabilistic Inference"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
4FWAwZtd2n | Scaling Test-Time Compute Optimally Can be More Effective than Scaling LLM Parameters | main | Active | test-time compute;LLMs;scaling;language models | foundation or frontier models, including LLMs | 5;6;8;8 | 3;4;5;3 | 3;4;4;3 | 3;3;4;4 | 3;3;3;3 | 6.75 | 3.75 | 3.5 | 3.5 | 3 | 0.406181 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. My understanding from the paper is that different FLOPs budgets of SFT can also be taken into consideration, and we may be able to teach LLMs better specialized skills (e.g., revision) with more deliberate post-training. I don't see much ablations on the revision SFT in the paper. Do you think that could be the case?\n2. This is an interesting and insightful paper. I have another question (better called thought) -- what if we scale on both test-time compute and pre-training compute, would there be another optimal solution under FLOPs-matched comparison? The paper only focuses on 1-to-1 exchange (scaling test-time compute only) but maybe that would also an interesting topic."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is fluent and coherent. The authors have clearly made their points so that the ideas they propose can easily be followed.\n2. The paper carries out comprehensive experiments and analysis on the research questions they study, i.e. 1) scaling test-time compute optimally and 2) exchanging pre-training and test-time compute.\n3. The experiments are sound.\n- For the first question, the baseline they choose and develop over, e.g. best-of-n and revision, are strong, which makes their points more convincing. And the study on sequential to parallel ratio is also helpful for readers to better understand the merits of different test-time inference methods and how they contributes to the \"compute-optimal\" strategy.\n- For the second question, the setup for FLOPs matched comparison is plausible and the ablations on model size and question difficulties are insightful, making their conclusions convincing and easy to understand."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper carries out comprehensive study on test-time compute. The authors study the topic from the following perspectives:\n1. The first question they study is \"How to scale test-time computation optimally\". The authors propose an adaptive \"computer-optimal\" strategy for scaling up test-time compute. They evaluate the method against test-time inference methods, e.g., searching against dense, process-based verifier reward models, and updating the model's distribution over a response adaptively, given the prompt at test time. They study how to optimally combine these methods to reduce the test-time budget and achieve better performance.\n2. Then they study to what extent test-time computation can substitute for additional pre-training, by comparing test-time inference on a smaller model compared to pre-training a ~14x larger model, under a FLOPs-matched comparison.\n\nFinally, the authors summarize their results with a finding that states *\"in some settings it is more efficient to pretrain smaller models with less compute, and then apply test-time compute to improve model outputs\"*."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Yet the adaptive \"compute-optimal\" strategy achieves great performance over other test-time baselines and can be used to substitute for pre-training in some cases, for me it's more like a proof-of-concept.\n- The experiments are all on MATH and \n- there're still many hyper-parameters under-explored, e.g.,\n - different verifiers;\n - different design of difficulty bins (number of bins and how to estimate difficulties),\n\nalong with the ones that have been explored in the paper:\n - types of test-time methods to select from (my understanding is we can have a set of many test-time methods to adaptively select from)\n - test-time compute need for different methods\n\nIt'll be much clearer and more helpful if the authors can discuss more about the possible \"hyper-parameters\", compensating the framework described in Eq. 1. (I see some of them are in the appendix, e.g. design of difficulty bins, but it would be much clearer for me if they can be put and discussed at the same place)"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. What do the different colors in Figure 3 (left) represent? It appears the legend is missing.\n\n2. What is the accuracy of the difficulty prediction? If it is quite accurate, the results with model-predicted difficulty may not be compelling because they rely on a powerful model for estimating difficulty and do not account for the associated computational cost, which is unrealistic. In real-world scenarios, using a powerful model entails significant costs, while employing a less powerful model may result in inaccurate difficulty predictions. Therefore, it would be valuable to demonstrate whether the observation holds true with a less powerful difficulty estimation model.\n\n3. What is the accuracy of the verifier? Does the performance of the verifier influence the observations?\n\n4. Is the accuracy of the verifier consistent across different difficulty levels? The observations might be influenced by the accuracy of the verifier.\n\n5. How many test questions are there in each difficulty bin? Is it consistently 100 questions per bin?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The problem addressed in this paper is highly meaningful. This paper provide systematic analysis of different approaches for scaling test-time computes in LLMs. The observations could inspire further research and is beneficial for advancing the entire field.\n\nThe paper is well written and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper target the challenges in effectively utilizing additional computation at test time to improve the accuracy of their responses, particularly for complex tasks. This is important to explore how to tradeoff inference-time and pre-training compute. This paper trying to understand the scaling behaviors of test-time inference methods. This work analyze main mechanisms, and observed that the effectiveness of recent methods varies depending on the specific problem and the base LLM used.The observation motivates applying a “compute-optimal” scaling strategy, which acts to most effectively allocate test time compute adaptively per prompt. This approach selects the most effective method for utilizing additional computation based on the specific prompt and question difficulty."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Both the PRM and ORM models utilize the PaLM 2-S* base language model, which is costly. However, the computational cost associated with PRM during test time is not clearly defined and calculated.\n\n2. In a specific domain, analyzing the trade-off between pre-training and test-time computation is more meaningful, but the paper primarily focuses on pre-training. As mentioned in the paper, \"we expect test-time compute to be most helpful when models already have all the basic “knowledge” needed to answer a query, and instead the primary challenge is about drawing (complex) inferences from this knowledge.\" Pre-training can be leveraged to learn the basic \"knowledge,\" while fine-tuning might be more effective in teaching the model how to draw complex inferences. Additionally, fine-tuning can be more efficient, providing greater benefits for the same computational cost.\n\n3. Whether the observations are consistent across different datasets and models remains unclear. \n\n4. I think some details are unclear, which may affect the strength of the observations. Since the main contribution of this paper is conducting analysis using methods proposed in previous works, clarifying these details is important."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- It seems all PRM results are presented in the way of *weighted* Best-of-N. Is PRM alone not working? If so, is it fair to compare lookahead search with the other two, given that the value in lookahead search can not be augmneted by majority vote?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "1. The paper provides a unified perspective on test-time compute, namely modifying either the proposal distribution or searching with an external verifier. This framework helps systematically understand and analyze different methods.\n\n2. The authors conduct comprehensive experiments on the MATH dataset using PaLM-2 models, comparing different test-time compute strategies across varying problem difficulties. The analysis is thorough, with detailed ablation studies showing the effectiveness of both mechanisms they proposed and the trade-offs between different approaches. \n\n3. The paper's findings have significant practical implications for model deployment. The demonstration that test-time compute can sometimes substitute for model size suggests a path toward deploying smaller, more efficient models while maintaining performance through clever use of inference-time computation.\n\n4. The paper provides very insightful analysis on searching with PRMs and iterative self-refinemet, both of which have drawn lots of attention from researchers but still lack in-depth understanding. I pretty much enjoy the detailed analysis on the three ways of use of a PRM."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates the best way to scale test-time computation in LLMs to improve their performance on challenging tasks (MATH). The authors analyze two primary mechanisms: (1) searching against dense, process-based verifier reward models, and (2) updating the model's proposal distribution over responses through sequential revision. They find that the effectiveness of different test-time compute strategies varies significantly based on question difficulty. Using this insight, they propose a \"compute-optimal\" scaling strategy that adaptively allocates test-time compute per question, achieving 4× better efficiency compared to best-of-N baselines. In a FLOPs-matched evaluation, the authors demonstrate that for problems where a smaller base model shows some success, test-time compute can outperform a 14× larger model."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. All experiments are done solely on the MATH dataset, which not only raises concerns on the genralizability of the conclusion, but also seems unfair when compared to scale up training time compute. Given multiple downstream datasets, we only need to scale up training time compute for once while have to scale up inference overhead every time. In this case, if we match the total test time compute with training time one, each task will be allocated much fewer inference time budget, which may lead to very different conclusions. \n\n2. When scaling inference time compute, dedicated efforts are needed to improve the verifier performance or models' capability of self-correction, which may also consume a non-trivial proportion of compute. This may also lead to unfair comparison to training time compute scaling.\n\n3. Most presentations are excelllent but I there are still some minor issues. was a bit confused when reading Figure 3 (Left) since there is no legends and the overlap of bars make colors even more complex. Also, the main body oof the paper should be self-constrained, so it may not be quite suitable to put the results in Sec 5.3 to Figure 14 which is inconvenient to readers."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Both [1] and [2] talk about the categorization of scaling inference-time compute. [1] separates inference compute into proposer and evaluator which is fairly similar to what this paper has proposed. [2] uses a more fine-grained definition to demonstrate different techniques. I think both of them should be cited and discussed accordingly.\n- Figure 3 (left) is a bit hard to interpret. What does each color mean?\nWriting and Styling\n- Line 076 \"We find that...\". The wording is a bit awkward. \n\n[1] Wang et al., Reasoning in Token Economies: Budget-Aware Evaluation of LLM Reasoning Strategies\n\n[2] Saad-Falcon et al., Archon: An Architecture Search Framework for Inference-Time Techniques"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "- The unification of inference-time computing methodologies is useful. There are a sea of inference-time compute strategies, and the unification of this makes the discussion of it much easier. Although I have concerns that some other papers talk about inference-time compute method categorization, for details, see Question (1) below. This is not major, but I wish the authors can address and clarify.\n- This unification also makes the experiments fairly comprehensive and thus having more confidence in the results.\n- Trying to do a fair comparison between inference-time scaling and training-time scaling is novel. I think the authors have framed a good problem and showed a nice analysis."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors are investigating the relationships between scaling inference-time compute and training-time compute. With that said, the paper first investigates and characterizes different types of inference-time compute. There are two main reasons: the first is to change the LLM proposal distribution by having an additional set of tokens, and the second is to use a verifier to rank the best response. Building on top of this, they try to unify those two inference-time compute strategies together, and create a compute-optimal inference-time strategy that's conditioned on question/prompt (more specifically, question difficulties). Lastly they draw comparison to training-time scaling, and find that it is more beneficial to scale inference-time for easier questions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- I partially disagree with the claim that the authors made at the end of the introduction. I think what the authors have shown doesn't mean scaling test-time computing can be preferable to scaling pretraining compute. For both medium and hard questions, using the same compute to scale pertaining works much more in favor of scaling inference compute. The advantage only comes in for easier questions, which I would argue is less important. Plus one can always do inference scaling for larger models. I think this claim may need to be justified more. \n- When training a verifier, is it more fair to include the compute to train the verifier as part of the inference-time compute? Similarly for the fine-tuned revision model.\n- It is a bit difficult to fathom what \"compute optimal\" is exactly. How is this obtained or how is it optimized? I understand that strategies are selected based on question difficulties but providing the exact detail would be nice. \n- The separation of Verifier and Revision is a bit confusing as both require a PRM. The main distinction I think between sections 5 and 6 is one is using search and another is using revision."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We find that by optimally scaling test-time compute we can outperform much larger models in a FLOPs matched evaluation."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024scaling,\ntitle={Scaling Test-Time Compute Optimally Can be More Effective than Scaling {LLM} Parameters},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=4FWAwZtd2n},\nnote={under review}\n}"
},
"abstract": {
"value": "Enabling LLMs to improve their outputs by using more test-time computation is a critical step towards building generally self-improving agents that can operate on open-ended natural language. In this paper, we scale up inference-time computation in LLMs, with a focus on answering: if an LLM is allowed to use a fixed but non-trivial amount of inference-time compute, how much can it improve its performance on a challenging prompt? Answering this question has implications not only on the achievable performance of LLMs, but also on the future of LLM pretraining and how to tradeoff inference-time and pre-training compute. Little research has attempted to understand the scaling behaviors of test-time inference methods, with current work largely providing negative results for a number of these strategies. In this work, we analyze two primary mechanisms to scale test-time computation: (1) searching against dense, process-based verifier reward models; and (2) updating the model's distribution over a response adaptively, given the prompt at test time. We find that in both cases, the effectiveness of different approaches to scaling test-time compute critically varies depending on the difficulty of the prompt. This observation motivates applying a ``compute-optimal'' scaling strategy, which acts to most effectively allocate test-time compute adaptively per prompt. Using this compute-optimal strategy, we can improve the efficiency of test-time compute scaling by more than 4x compared to a best-of-N baseline. Additionally, in a FLOPs-matched evaluation, we find that on problems where a smaller base model attains somewhat non-trivial success rates, test-time compute can be used to outperform a 14x larger model."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"test-time compute",
"LLMs",
"scaling",
"language models"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/4d2e5a38334f84198af474801f91dd6955a4b5fe.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Scaling Test-Time Compute Optimally Can be More Effective than Scaling LLM Parameters"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
4G6Q4nJBTQ | Evaluating Fairness and Mitigating Bias in Machine Learning: A Novel Technique using Tensor Data and Bayesian Regression | main | Active | Fairness;Bias mitigation;Skin color;Computer vision;Bayesian regression | alignment, fairness, safety, privacy, and societal considerations | 3;3;3 | 4;4;4 | 2;2;2 | 1;2;1 | 2;1;2 | 3 | 4 | 2 | 1.333333 | 1.666667 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See weaknesses"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Extend categorical groups by representing skin color as distributions on which\nWasserstein Distance can be applied. The method is generically applicable to multi-\ndimensional and continuous data.\n\n2. A new latent bias mitigation method is proposed for individual fairness that leverages\nBayesian regression estimation of performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses individual fairness when the sensitive attribute is skin color. Most\nliterature deals with categorical sensitive features, while skin color is a tensor and even\nits annotation can be often lacking. The proposed method avoids classifying the color\ninto categories, and aims to capture fine-grained nuances in fairness. Instead, it\nrepresents it into probability distributions and apply Wasserstein distance, based on\nwhich Bayesian regression with polynomial functions is used to estimate the\nperformance. Finally, the latent bias is mitigated by reweighting the cross-entropy loss\nwith the prediction performance (after softmax)."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Although skin color is an important fairness indicator and its continuity fits the\nmotivation of the paper, it appears a significant limitation to only consider skin\ncolor. There are many other continuous sensitive features, and the paper didn’t\nconsider in the experiment. Is it because they are too easy and do not unleash\nthe full power of the method (which can be applied to tensors)? It will be\ninteresting to see the effectiveness of the proposed method on other continuous\nvalued attributes.\n2. What about using the logit of multi-class or multi-label classification of skin color?\nThe current color distribution is constructed in an unsupervised fashion. So how\ncan we guarantee that eventually what is learned/extracted is not targeting other\nfeatures of skin, say, coarseness. Although I do agree that color is probably the\nmost salient feature of skin, does it mean the method has to be hand-tuned for\neach domain?\n3. The presentation is very unclear in Section 3.2. Is $n$ the batch size that was\nset to 1% of the validation dataset? If the distances $d_i$ are all with respect to\n$x_0$, then does it mean that the performance of $n$ instances are based on\njust one baseline image $x_0$?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "All the questions and suggestions are mentioned in the Weaknesses section."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Introduces a novel approach to fairness by representing skin color as continuous tensor data, avoiding traditional categorical groupings.\n2. Uses Bayesian regression and Wasserstein Distance to capture individual-level fairness without requiring categorical annotations."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a method for measuring fairness and mitigating bias in machine learning models that handle skin color as tensor data rather than traditional categorical labels. The approach leverages probability distributions and Wasserstein Distance, to capture detailed variations in skin tone, allowing for an individualized fairness assessment. The paper proposes a Bayesian regression model that predicts performance outcomes based on these nuanced skin color distributions, rather than on coarse demographic categories. Additionally, the study introduces a training method that mitigates bias through a weighted loss function, penalizing model performance inversely to the predicted fairness distance. This approach aims to reduce latent biases within and across typical group classifications, thus improving fairness in image classification tasks without requiring skin color annotation. The empirical results demonstrate a reduced correlation between skin tone and prediction accuracy."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**Insufficient Coverage and Comparison with Related Works:**\nThe paper does not provide a discussion on dependence-based methods [6-11] or adversarial representation learning approaches [1-5], both of which are established techniques for debiasing machine learning models. While the setting of this study is distinct, the continuous skin tone attribute extracted in the initial phase of this method could also be applied in models handling continuous attributes, aligning with those frameworks.\n\nI have listed some relevant works below that are capable of handling the data type used in your method, providing potential baselines for comparing the proposed approach:\n\n\n[1] Wang, Tianlu, et al. \"Balanced datasets are not enough: Estimating and mitigating gender biases in deep image representations.\" ICCV, 2019.\\\n[2] Roy, Proteek Chandan, and Vishnu Naresh Boddeti. \"Mitigating information leakage in image representations: A maximum entropy approach.\" CVPR, 2019.\\\n[3] Edwards, Harrison, and Amos Storkey. \"Censoring representations with an adversary.\" arXiv, 2015.\\\n[4] Xie, Qizhe, et al. \"Controllable invariance through adversarial feature learning.\" NeurIPS, 2017.\\\n[5] Madras, David, et al. \"Learning adversarially fair and transferable representations.\" ICML, 2018.\\\n[6] Dehdashtian, Sepehr, et al. \"Utility-Fairness Trade-Offs and How to Find Them.\" CVPR, 2024.\\\n[7] Sadeghi, Bashir, et al. \"On characterizing the trade-off in invariant representation learning.\" TMLR, 2022.\\\n[8] Sadeghi, Bashir, et al. \"Adversarial representation learning with closed-form solvers.\" ECML-PKDD, 2021.\\\n[9] Quadrianto, Novi, et al. \"Discovering fair representations in the data domain.\" CVPR, 2019.\\\n[10] Chzhen, Evgenii, et al. \"Fair regression with Wasserstein barycenters.\" NeurIPS, 2020.\\\n[11] Jiang, Ray, et al. 
\"Wasserstein fair classification.\" UAI, 2020.\n\nWithout comparison to a baseline from the list above, it may be challenging to accurately assess the performance of the proposed model and validate the paper's claimed contributions.\n\n## Minor Edits\n1. There appears to be an unidentified reference in line 46.\n2. In line 207, the phrase \"Assuming the distributions are IID\" may be inaccurate; it seems more likely that \"samples are i.i.d.\" was intended."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See \"Weaknesses\".\n\n_**Justification of Rating**_ \n\nThe paper presents one noteworthy concept: the treatment of sensitive attributes as continuous variables rather than discrete categories. This approach is particularly well-suited for skin color, where it can be implemented straightforwardly through pixel value statistics. However, this single contribution, while valuable, is insufficient to warrant acceptance in its current form.\nA more comprehensive contribution would develop a framework capable of handling various sensitive attributes as continuous variables (such as age) rather than limiting the scope to skin color alone. The current implementation, while promising in concept, remains too narrow in its application and theoretical development.\nTwo significant deficiencies further impact the paper's potential acceptance:\n\n1. The absence of comparative analysis against existing methods makes it impossible to evaluate the practical benefits of this approach. Without such benchmarking, the methodology's advantages remain purely theoretical.\n2. The paper's organizational structure requires substantial improvement to effectively communicate its contributions and methodology.\n\nThese limitations, combined with the narrow scope of the primary contribution, lead me to recommend against acceptance in its current form. However, with expanded scope, rigorous comparative analysis, and improved organization, this work could develop into a significant contribution to the field."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The approach of treating physical characteristics as continuous variables, rather than discrete categories, is compelling. This applies not only to skin tone, but also to other demographic attributes (eg. age, perceived gender,...) and physical features (eg. hair color, perceived attractiveness, ...). While the idea of adopting continuous representations isn't novel [1,2] and the proposed method applies only for skin tone, the idea of implementing it without requiring annotated data presents an interesting research direction.\n\n[1] Kumar, Neeraj, et al. \"Attribute and simile classifiers for face verification.\" 2009 IEEE 12th international conference on computer vision. IEEE, 2009.\n\n[2] Moeini, Ali, et al. \"Regression Facial Attribute Classification via simultaneous dictionary learning.\" In Pattern Recognition, volume 62, pages 99-113, 2017. DOI: https://doi.org/10.1016/j.patcog.2016.08.031"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper addresses the conventional approach to skin tone annotation by proposing a novel method that treats skin tone as a continuous variable rather than a categorical classification. The authors leverage the raw values obtained through Individual Typology Angle (ITA) measurements, utilizing these continuous measurements before their traditional conversion into discrete categories. Building upon this continuous representation, they develop a bias mitigation framework that incorporates the distance information derived from these ITA values to create a regularized training loss function. Experiments are conducted on established group fairness benchmark datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Although I understand that the scope is to focus on skin tone, the study is a bit limited as the proposed methodology seemingly doesn't transfer to any other attribute of interest (also other attributes may benefit from treating them in a continuous range of values, rather than as categorical variables, eg. \"age\").\n2. The proposed methodology raises several concerns regarding its novelty and effectiveness. The required preprocessing step appears to be a general solution that could be applied to any existing method, rather than a unique contribution. Furthermore, the training process relies heavily on conventional binary classification with regularization, without demonstrating significant innovation. The absence of comparisons with state-of-the-art unfairness mitigation techniques makes it difficult to evaluate the method's relative merits. Most critically, the lack of baseline comparisons leaves readers unable to assess the tangible advantages this approach might offer over existing solutions.\n3. The manuscript would benefit from several structural and technical refinements. In terms of organization, the contributions section should be relocated to the end of the introduction. The current list of contributions requires revision: contributions #2 and #3 should be consolidated as they represent a single advancement, while contributions #4 (experimental validation) and #5 (code sharing) should be removed as they represent standard research practices rather than novel contributions. The related works section should conclude with a clear paragraph distinguishing this study from existing literature. Additionally, the paper needs technical cleanup, including addressing various grammatical errors and misspellings, adding a missing reference on line 46, and improving the legibility of Figure 3, which is currently difficult to read."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024evaluating,\ntitle={Evaluating Fairness and Mitigating Bias in Machine Learning: A Novel Technique using Tensor Data and Bayesian Regression},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=4G6Q4nJBTQ},\nnote={under review}\n}"
},
"abstract": {
"value": "Fairness is a critical component of Trustworthy AI. In this paper, we focus on Machine Learning (ML) and the performance of model predictions when dealing with skin color. Unlike other sensitive attributes, the nature of skin color differs significantly. In computer vision, skin color is represented as tensor data rather than categorical values or single numerical points. However, much of the research on fairness across sensitive groups has focused on categorical features such as gender and race. This paper introduces a new technique for evaluating fairness in ML for image classification tasks, specifically without the use of annotation. To address the limitations of prior work, we handle tensor data, like skin color, without classifying it rigidly. Instead, we convert it into probability distributions and apply statistical distance measures. This novel approach allows us to capture fine-grained nuances in fairness both within and across what would traditionally be considered distinct groups. Additionally, we propose an innovative training method to mitigate the latent biases present in conventional skin tone categorization. This method leverages color distance estimates calculated through Bayesian regression with polynomial functions, ensuring a more nuanced and equitable treatment of skin color in ML models."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Fairness",
"Bias mitigation",
"Skin color",
"Computer vision",
"Bayesian regression"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/6370f6f753ed3a13cc635cb6d462d505b7df8c1d.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Evaluating Fairness and Mitigating Bias in Machine Learning: A Novel Technique using Tensor Data and Bayesian Regression"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
4GD7a9Bo9A | Bias Learning: Quantifying and Mitigating Position Sensitivity in Text Embeddings | main | Active | Deep Learning or Neural Networks;Similarity and Distance Learning;(Application) Information Retrieval Regression;(Cognitive/Neuroscience) Language;(Other) Statistics | interpretability and explainable AI | 3;3;6;6 | 4;5;4;3 | 3;1;3;3 | 3;1;3;3 | 3;3;2;3 | 4.5 | 4 | 2.5 | 2.5 | 2.75 | -0.707107 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "N/A"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "1. The positional bias in text embeddings can be an important aspect for research in long embedding models.\n2. The paper is clear written and easy to read."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the phenomenon of positional bias in text embeddings. It observed that perturbations at the beginning of the texts affect the embeddings more than the changes at other parts of the texts. To reduce the positional discrepancy, a position-aware data sampling technique is proposed, where parts of the training data are sampled from later parts of the texts. Experiments show that after post-training with the technique, the resulted embeddings show reduced sensitivity to positions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The experiments are lacking. The paper does not evaluate any text embedding benchmarks for retrieval performance. Many easy-to-run pipelines exist to evaluate on benchmarks such as MTEB, and there is no excuse to leave it to future work. It is important to evaluate on real retrieval task because it is unproven whether positional bias is harmful or not since such bias might naturally exists in real data. \n2. For section 4.3, the writing bias exists in the training data. To argue that the cause of the position bias is not human writing, it is better to shuffle the training data, train another embedding model and to see if the resulted embeddings still exists in the newly trained model."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Could PADS be integrated directly into model pre-training, rather than as a post-training adjustment? If so, how would this affect computational efficiency and model performance?\n- How might the proposed approach impact the design of long-context models?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper provides a comprehensive investigation of positional bias across multiple embedding models, input sizes, and document types.\n- The Position-Aware Data Sampling (PADS) method is an innovative proposal to counteract positional bias, and the experiments show measurable improvements.\n- The paper uses diverse datasets, making validation of the conclusions sonds."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates positional biases in embedding models used in information retrieval (IR) and semantic similarity tasks, revealing that these models give disproportionate weight to the beginning of text inputs. By conducting experiments with insertion and deletion of irrelevant text at various document positions, the authors find that text at the beginning influences the model’s output embeddings more significantly than text in later sections. They attribute this bias to positional encoding techniques and training practices. To address this, the paper proposes a novel data augmentation method called Position-Aware Data Sampling (PADS), which mitigates the effect of positional bias, thereby enhancing model robustness for longer texts."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The analysis is limited to a few embedding models and positional encoding types, which may restrict generalizability to other architectures or languages.\n- Cosine similarity as the primary evaluation metric may not fully capture how positional bias affects end-task performance (e.g., retrieval accuracy, relevance ranking)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Could the authors clarify how shuffled sentence embeddings effectively remove human writing biases?\n2. Could they demonstrate PADS’s benefits on real-world IR tasks to show practical value?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Previous work mainly investigates positional bias in encode-decoder models while the work focus on positional bias in embedding models.\n2. The paper is clearly written, providing a well-organized presentation of embedding models and positional encoding techniques."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates positional sensitivity in encoder-based models. Through text insertion and removal experiments, the paper reveals that embeddings exhibit a bias towards early text positions, especially in models using APE and RoPE. It performs regression analysis to eliminate the effects of human writing styles. It proposes PADS method to help mitigate this bias."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Although the authors claim their regression analysis controls for human writing style, it’s not entirely convincing. Human writing conventions (e.g., important content at the beginning and end) are likely embedded in models trained on large corpora. Testing with shuffled sentences may not fully isolate this influence from model bias.\n2. The results show ALiBi’s greater robustness compared to APE and RoPE, yet the paper does not investigate the underlying reasons, which would enhance its analytical contribution."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "### 1. Questions\n\n- I am confused by this claim “Embedding models, in contrast, are theoretically position-invariant, due to their lack of the causal attention mask”. To the best of my knowledge, there are some LLM-based text embeddings like [1] with causal attention mask, although the causal attention mask may limit the performance of the embedding on STS tasks [2]. \n- I wonder if inserting irrelevant text or removing text will affect the coherence or change the semantics of the original text.\n- Long-context embeddings tend to capture topic-level semantics. it is more coarse-grained than short-context. Perturbation may not show significant changes in the results. Since two of the Alibi models are longer-context (>512 tokens), that might be the reason why it is more robust than RoPE. Could you separate the reported performance between long-context and short-context models for Table 2?\n- The selected embedding models are BERT-based and contain the special token CLS at the initial. Is it a possible reason why it prioritizes at the beginning? I guess the priority depends on the selected backbone. It would be appreciated if you could experiment with autoregressive LLM-based embeddings to see whether they will be prioritized in the last part of the text. This will make this work more comprehensive.\n\n\n**Reference**\n\n[1] Lee, J., Dai, Z., Ren, X., Chen, B., Cer, D., Cole, J. R., ... & Naim, I. (2024). Gecko: Versatile text embeddings distilled from large language models. arXiv preprint arXiv:2403.20327.\n\n[2] Li, X., & Li, J. (2024, June). BeLLM: Backward Dependency Enhanced Large Language Model for Sentence Embeddings. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) (pp. 792-804).\n\n\n### 2. Typo and Suggestions\n\n- L20: with with -> with
\n- L39: “needles” -> ``needles’’\n- Citation format: use \\citep in L184, L191, L194, and L197.\n- It is better to provide the full name of APE and RoPE in the abstract.\n- In the experiment, it is better to take the pooling strategy into account."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- This work investigates an interesting phenomenon: mainstream text embedding models disproportionately prioritize the initial portion of the input.\n- Comprehensive ablation studies and regression analysis are conducted to research the bias position issue in text embeddings.\n- The findings are helpful for text embeddings and suggest a future research direction for long-context embedding models."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates the impact of content position and input size on text embeddings and reveals current mainstream embedding models disproportionately prioritize the initial portion of the input text. Extensive ablation studies and regression analyses are conducted to investigate the position bias issue."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- For text embeddings with CLS pooling, such as bge-m3, bge-large-en-v1.5, and UAE-Large-V1, prioritizing the initial part of the text should be advantageous. For text embeddings with average pooling, which part is prioritized doesn't seem to matter since it's global pooling. \n- As claimed embeddings are important for information retrieval (IR) and semantic textual similarity (STS), but there is no experiment to show how such an initial priority phenomenon affects STS or IR tasks."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We demonstrate the existence of positional biases in text embedding models and investigate data augmentation methods to address these effects."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024bias,\ntitle={Bias Learning: Quantifying and Mitigating Position Sensitivity in Text Embeddings},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=4GD7a9Bo9A},\nnote={under review}\n}"
},
"abstract": {
"value": "Embedding models are crucial for tasks in Information Retrieval (IR) and semantic similarity measurement, yet their handling of longer texts and associated positional biases remains underexplored. In this study, we investigate the impact of content position and input size on text embeddings. Our experiments reveal that embedding models, particularly APE- and RoPE-based models, disproportionately prioritize the initial portion of the input. Ablation studies demonstrate that insertion of irrelevant text or removal at the start of a document reduces cosine similarity between altered and original embeddings by up to 12.3\\% more than ablations at the end. Regression analysis further confirms this bias, with sentence importance declining as position moves further from the start, even with with content-agnosticity. We hypothesize that this effect arises from pre-processing strategies and chosen positional encoding techniques. To address this, we introduce a novel data augmentation scheme called Position-Aware Data Sampling (PADS), which mitigates positional bias and improves embedding robustness across varying input lengths. These findings quantify the sensitivity of retrieval systems and suggest a new lens towards long-context embedding models."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Deep Learning or Neural Networks",
"Similarity and Distance Learning",
"(Application) Information Retrieval Regression",
"(Cognitive/Neuroscience) Language",
"(Other) Statistics"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/f3a1d195b7e2304c74aa08d11b5603079b0d835e.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/f78a8f28b1c71342862791865900ca7ee302a481.zip"
},
"title": {
"value": "Bias Learning: Quantifying and Mitigating Position Sensitivity in Text Embeddings"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
4GJVU31mF7 | Unified Music-Language Model for Symbolic and Waveform Integration | main | Active | Music Language Model;MultiModal Language Model;Music Understanding;Music Generation | applications to computer vision, audio, language, and other modalities | 3;3;5;6 | 4;3;4;4 | 2;2;3;3 | 3;2;2;2 | 2;1;2;3 | 4.25 | 3.75 | 2.5 | 2.25 | 2 | 0.555556 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "no concern"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. one main weakness of existing text-based music LMs (e.g. ChatMusician) is that they can only answer the types of questions that they have been trained on (either in the pretraining stage or the instruction fine-tuning stage) and fail to generalize to new types of questions. Have you looked into this problem? (I am assuming that you use one fine-tuned model to handle all kinds of text-based queries rather than applying one LoRA per type of query)\n\n2. the bar-level alignment is interesting. How is the aligned data prepared -- synthesizing the ABC into audio or applying MIR tools to audio? Could you handle audio whose tempo is unstable, both in the training and inference stages?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Comprehensive Multimodal Integration: The model successfully combines symbolic and waveform music representations, addressing a major limitation in existing models and enabling better music understanding and generation.\n\nInnovative Bar-Level Alignment: The bar-level tokenizer offers a novel approach to synchronizing symbolic and audio representations, improving the model’s ability to process and generate music in a contextually relevant manner."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents UniMuLM, a Unified Music-Language Model designed to integrate symbolic music, waveform music, and textual instructions into a cohesive framework. The model addresses the challenges posed by the distinct representations of music (symbolic notation vs. audio waveforms), aiming to leverage their complementary strengths. Key contributions include:\n\nUnified Tokenization: Introduction of a bar-level tokenizer that aligns symbolic and waveform music representations to enable fine-grained, mutually reinforced understanding.\n\nMulti-Stage Training Strategy: The model is trained in three stages to inject knowledge, align music representations, and fine-tune the system for various music-related tasks.\n\nComprehensive Evaluation: Demonstrates state-of-the-art (SOTA) performance across different music tasks, validating the effectiveness of integrating multiple music modalities."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Inconsistency and Misalignment in Title, Problem Formulation, and Paper Focus\n\nThe title, \"Unified Music Language Model,\" suggests a comprehensive system capable of generating audio, which is misleading since the model does not generate audio directly. Instead, it is an \"audio-informed, text-based music language model\" that can handle both music description and symbolic music generation.\nThe problem formulation further contributes to the confusion, as it mainly addresses a music description problem. However, the actual contributions focus more on using audio information to enhance symbolic music generation and music understanding, indicating a disconnect between the proposed problem, the title, and the work's real impact.\n\n\n2. Suboptimal Baselines and Limited Impact of SOTA Claims\n\nThe choice of baselines for music generation, such as ChatMusician and MUPT, undermines the significance of the model's claimed state-of-the-art performance. Both baselines are first-of-their-kind general-purpose multimodal music models, but their generation quality is subpar compared to dedicated symbolic generation models like Music Transformer or the more advanced whole-song generation via hierarchical diffusion models.\n\nA similar issue exists in the music understanding benchmarks. Using Mu-LLaMA as a baseline, while suitable for demonstrating language model integration, fails to compare favorably against specialized Music Information Retrieval (MIR) tools, which excel in task-specific performance. The broader question remains whether integrating music information into a text-based language model leads to genuinely superior performance.\n\nUltimately, the novelty of integrating music data into language models has become less groundbreaking. The field has matured, and the critical evaluation should focus on whether this integration yields better performance. Based on the provided demos, the symbolic music generation quality lags behind specialized models, and in music QA tasks, errors were evident, as seen in the 2nd and 3rd showcased examples."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1.\tDataset Construction Process: Could you clarify the dataset construction process, particularly regarding any differences in cleaning and downsampling strategies across datasets? A detailed explanation of the sampling methods used for each dataset would enhance transparency.\n\n2.\tMulti-Track Music Processing Details: While the paper discusses bar-level tokenization for single-track music, it lacks details on aligning multi-track waveforms with ABC notation. Could you provide more information on how alignment is managed for multi-track music?\n\n3.\tExploration of General Model’s Music Knowledge: Given that models like GPT-4 and GLM-4 are noted for their in-context learning capabilities, have you explored prompt engineering with these models to assess their music theory knowledge? This could offer a fairer basis for comparison.\n\n4.\tMusic Understanding Performance Analysis: The UniMuLM model performs better on shorter-text datasets compared to longer-text ones. Could you elaborate on this performance variation? Additional analysis would provide valuable insights into the underlying factors.\n\n5.\tMore Details about the Subjective Evaluation: how many participants joined the subjective test? What are their music backgrounds? What’s more, your demo page only provides single-track samples; what about the multi-track generation results? How do you ensure consistency across different raters?\n\n6.\tAbout the Waveform Music Generation: intuitively, when seeing your paper’s title, I expected to see a comparison between text-to-waveform and text-to-symbolic generation. From my perspective, that is the main challenge and concern of symbolic-waveform music alignment work: supporting controllable waveform generation and more diverse symbolic generation simultaneously by bringing the embeddings from the two domains closer. Therefore, if possible, please discuss your initial findings, results, or the challenges you've encountered in the text-to-waveform task, as mentioned in your future-work section. It would really benefit the whole AI music community."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "This paper introduces a music-language model capable of encoding cross-modality data for symbolic music and waveforms. It addresses the challenge of temporal consistency by using a bar-level tokenizer to align music waveforms with notation, employing contrastive and reconstruction losses to enhance alignment between symbolic and waveform information. \n\nThe paper is well-written, with a clear and well-defined problem statement that makes the methodology and contributions straightforward and easy to understand. This work offers a valuable approach to unifying multiple input modalities in music through a unified tokenizer, paving the way for enhanced music understanding and more controllable music generation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a Unified Music-Language Model that integrates symbolic music and waveforms through bar-level tokenization inspired by bar-patching [1], addressing the issue that current music language models struggle to handle both notation and performance due to temporal scale inconsistencies. \nThe proposed approach follows a three-stage training strategy. First, it incorporates music knowledge to warm up Llama-3 with foundational music theory concepts. Second, a bar-level tokenizer is trained on paired symbolic and waveform data using contrastive and reconstruction losses. Finally, LoRA-tuning is applied to adapt the model for various downstream tasks involving diverse music representations.\nBy uniting these two modalities, the model enhances both music understanding and generation capabilities, showing advantages over single-modality models in three downstream tasks: music theory injection, waveform-based music understanding, and symbolic music generation.\n\n[1] Shangda Wu, Dingyao Yu, Xu Tan, and Maosong Sun. Clamp: Contrastive language-music pre-training for cross-modal symbolic music information retrieval. In ISMIR, pp. 157–165, 2023."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Limited Novelty in Modality Alignment: This paper is not the first to align audio waveforms with symbolic representations. For example, JASCO [1] employs ‘nearest’ interpolation for chords and ‘linear’ interpolation for melody, resampling them to match EnCodec’s frame rate. To strengthen the paper’s contribution, it would be helpful to emphasize the specific advantages offered by your alignment strategy. For example, how your bar-level tokenization differs from or improves upon interpolation-based approaches in terms of preserving musical structure or handling different types of musical elements.\n[1] Or Tal, Alon Ziv, Itai Gat, Felix Kreuk, and Yossi Adi. “Joint Audio and Symbolic Conditioning for Temporally Controlled Text-to-Music Generation.” arXiv preprint arXiv:2406.10970, 2024.\n\n2. Marginal Improvement on Waveform Music Understanding Tasks: The model demonstrates limited improvement over Mu-LLaMA on 3 out of 4 datasets for waveform music understanding tasks. This raises questions about the actual benefit of incorporating symbolic information to enhance waveform audio understanding. Providing further exploration or justification of the advantages of symbolic data for audio understanding would strengthen the paper. For example, you can provide a more detailed analysis of where and why your model shows improvements or limitations compared to Mu-LLaMA and discuss specific examples or task types where symbolic information seems to help or hinder performance.\n\n3. Ignorance of the Difference between Real-World and Synthesized Waveforms: the alignment stage does not train on real-world waveforms, which might behave differently from the synthesized waveforms. I understand that large-scale paired data is lacking, but you can still use some data augmentation strategies, such as using different soundbanks to render the symbolic music, or applying transcription tools (e.g. MT3 [2]) to get a coarse symbolic representation and refining it to ensure a valid format via GPT-4. It would be better to discuss in the paper the potential impact of using synthesized vs. real-world waveforms on the model's performance.\n[2] Gardner, J., Simon, I., Manilow, E., Hawthorne, C., & Engel, J. (2021). MT3: Multi-task multitrack music transcription. arXiv preprint arXiv:2111.03017.\n\n4. Minor Typos: There are minor typos in the description of Figure 3, such as “constrastive loss” and “corss-reconstruction losse.” In Section 4.3, symbols in formula (4) do not correspond with the symbols in the textual explanations above."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Questions are mentioned in the weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The proposed model incorporates data from multiple modalities and provides a general interface for generation and understanding. Music features from the audio domain and the symbolic domain are considered and proved helpful for the task."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a fine-tuned language model for music understanding and generation. The music representations used in this paper are text-like ABC format, audio format, and a representation jointly learned from symbolic and audio domain music data. The input to the model is text and music, and the output is text or music in ABC format. The model demonstrates superior performance in some tasks in generation and understanding."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The title is misleading, as it suggests the language model itself integrates multiple modalities, whereas the actual integration occurs primarily at the tokenization level.\n2. The paper's writing lacks clarity and rigor. Figure 3 is confusing because there are no distinctions between inputs and outputs and no explanation of the color coding for adapters and tokens. The notation is sloppy; symbols like L_c appear in the figure but are not defined anywhere else in the paper. In the main text, terms like LM(), Adapter(), and Symbolic-Encoder() are presented as informal expressions rather than proper mathematical functions.\n3. The integration of audio and symbolic data is limited by the fact that the paired audio is synthesized. The quality on the demo page is not convincing.\n4. Figure 1 feels promotional, yet it's hard to tell what the nine tasks are after finishing reading (e.g., what is MCQ-inpainting). Additionally, the experiments only assess performance at a superficial level. The paper would benefit from deeper analysis to demonstrate what the model actually gains from modality integration. Providing additional demos, case studies, and comparisons either in the appendices or on the demo page would strengthen the evaluation."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "* The study lacks mention of ethical review or participant demographics, particularly whether invited participants with diverse music backgrounds were included.\n* The paper utilizes several music datasets without clarifying their copyright statuses. For instance, some datasets, like LP-MusicapsMSD, are not publicly accessible, raising questions about how the authors secured permissions or accessed copyrighted content for this research."
},
"flag_for_ethics_review": {
"value": [
"Yes, Potentially harmful insights, methodologies and applications",
"Yes, Responsible research practice (e.g., human subjects, data release)"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Typographic and Formatting Issues:\n* The numbering in the paper is inconsistent (e.g., Section 3 contains only one paragraph, Section 4.1 includes only 4.1.1).\n* Figure 1’s color distinctions are difficult to discern, and Figure 3(a) is complex, potentially making it harder to interpret than the accompanying text.\n* Formulae could benefit from simplification. Besides, some notation feels excessive or potentially confusing. For instance, is “Zbar” intended to be “Z_{bar}”?\n* In Table 1, could the authors clarify the meanings of \"Quantity\" and \"Sampled\"? Do these refer to token count and instruction-answer pairs?\n* Many typos\n\nExperimental & Dataset Details:\n* Please clarify the precision during training. Is the 4-bit training precision sufficient? Half-precision training is typically float16 (16-bit), and 4-bit precision may impact training stability or gradient calculations. Did the authors consider gradient overflow or stability issues, or was 4-bit precision specifically chosen for other reasons? (Or maybe 4B=32bit, which is the common training precision)\n* How was MIDI data transformed into ABC notation, and how was bar/beat-level annotation achieved? Transcribing MIDI to bar-level sheet music is not trivial. The quality of this annotation could significantly impact bar-level information integrity."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The authors introduce a novel and well-motivated approach that leverages multiple music representations for a unified model, which is an important advancement for multimodal music understanding. The proposed method and experimental design appear to be robust, addressing significant challenges within the field and offering promising results on music theory, acoustic music understanding and symbolic music generation tasks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents UniMuLM, a unified music language model designed to address the limitations of existing music language models (MuLMs) that typically rely on a single representation type. UniMuLM integrates symbolic music, waveform audio, and textual instructions through a novel bar-level tokenizer to facilitate fine-grained, multi-modal music understanding. To handle the complexities of multimodal alignment and stabilize training across these varied representations, the authors implement a multi-stage training strategy. Their empirical evaluation on nine music-related tasks indicates UniMuLM’s performance improvements over current state-of-the-art methods, underscoring the benefits of unified, multi-representational learning for advancing MuLMs."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper’s primary claims greatly overstate the experimental outcomes, for the following reasons.\n* The claim of 9 music tasks might be a miscount. Or maybe the authors (over)claim the four music captioning datasets as four tasks (theory, caption, QA and generation). According to my understanding, it should be a music theory benchmark, three music caption/description datasets, 1 musical, and 2 types of music generation with 2 different types of evaluation methods. Please clarify what the nine different tasks are.\n* The evaluation lacks comparisons with advanced baselines released in the past 9 months, such as SALMONN, GPT-4o and Qwen2-audio, which may provide much better results on music theory and music captioning. For example, the Qwen-audio and SALMONN tech reports include SOTA performance on music captioning, and GPT-4o is well-known for its audio instruction-following capability.\n* Additionally, several tasks are missing comprehensive evaluation metrics (e.g., BERTScore and METEOR for music captioning, which are widely used and much more persuasive compared to the BLEU score reported in this paper).\n* The paper claims the alignment of 3 modalities, but it does not explore the direct alignment of the symbolic and audio modalities without intermediate text, e.g. audio transcription to ABC notation, which limits insights into such tasks.\n* Further, ablation studies are absent for the loss functions introduced in stage two of training, leaving uncertainty around the necessity and optimal weighting of each component. This does not make the methodology proposed in stage 2 solid. You can run experiments changing the loss weights or deleting part of the loss.\n* The authors claim the impact of bar-level tokenization. However, there is no ablation study on not using such tokenization. Besides, the authors do not clarify which dataset requires bar-level information for the model to evaluate. Please clarify why the 4 or 9 tasks benefit from the bar-level information you provided, and show experimental results if the bar-level tokens indeed help. Maybe it does not align well, or hurts the performance by increasing the token length."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024unified,\ntitle={Unified Music-Language Model for Symbolic and Waveform Integration},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=4GJVU31mF7},\nnote={under review}\n}"
},
"abstract": {
"value": "Music is a unique and essential modality constituting human life, presenting challenges for multimodal advances due to its complex structure and intricate details. Recent Music Language Models (MuLMs) facilitate music understanding and generation by leveraging the inherent knowledge and reasoning capabilities of pre-trained Language Models (LMs), yet they overlook the complementary benefits of different music representations. To this end, we propose a unified music language model, named UniMuLM, extending the existing approach of using a single representation to multiple music representations. Concerning the unification, we address the challenges of missing modalities and unstable training to adapt to different scenarios. Specifically, we integrate symbolic music, waveform music, and textual instructions into an LM and design a bar-level tokenizer to explore the fine-grained correlations between different modalities. Moreover, we propose a multi-stage training strategy to progressively enhance this synergy. Trained on open-source datasets, UniMuLM demonstrates superior performance compared to SOTA methods across 9 music tasks."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Music Language Model",
"MultiModal Language Model",
"Music Understanding",
"Music Generation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/e5ee5ed5b6db710b7a8bfdd4b025d1ffc703ef68.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Unified Music-Language Model for Symbolic and Waveform Integration"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
4GSOESJrk6 | DreamBench++: A Human-Aligned Benchmark for Personalized Image Generation | main | Active | personalized image generation;subject-driven image generation;personalization;image generation;benchmarking;human-aligned evaluation | datasets and benchmarks | 3;5;5;6 | 4;3;4;3 | 2;2;2;3 | 2;2;2;3 | 3;3;3;3 | 4.75 | 3.5 | 2.25 | 2.25 | 3 | -0.688247 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Can you provide specific comparisons between DREAMBENCH++ and existing benchmarks like DALL-EVAL and HRS-Bench? For example, a comparison table showing the number of samples, types of skills assessed, and evaluation metrics used in each benchmark is necessary. This would help clarify the unique contributions of DREAMBENCH++.\n\n2. May I ask if the authors have considered developing new evaluation metrics, or proposing improvements to existing personalized T2I models based on benchmark insights?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "I appreciate that the authors invest a lot of effort to build a new benchmark, including designing prompts, collecting images, and conducting experiments across several existing T2I models. The paper is well written and clear, and the figures are informative, which is good."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposed a new T2I evaluation benchmark, DREAMBENCH++, which is introduced as a human-aligned benchmark for personalized image generation, addressing the limitations of current evaluations that are either misaligned with human judgment or time-consuming and expensive. The researchers systematically design prompts to make GPT models both human-aligned and self-aligned, enhanced with task reinforcement, and construct a comprehensive dataset of diverse images and prompts. By benchmarking 7 modern generative models, this paper demonstrates that DREAMBENCH++ achieves significantly more human-aligned evaluation, contributing to innovative findings in the field."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. I am confused about the motivation for building this new benchmark. As stated on lines 084-085, \"can we comprehensively evaluate these models to figure out which technical route is superior and where to head?\", it's hard for reviewers to understand what distinguishes your benchmark from other existing benchmarks. Does DREAMBENCH++ assess more skills than existing T2I benchmarks? Does DREAMBENCH++ include more samples or higher-quality images? A comparison table (DREAMBENCH++ vs. existing benchmarks) is necessary to convince reviewers how the new benchmark can benefit the T2I community.\n\n2. Nowadays, there are several existing T2I benchmarks, for example, DALL-EVAL [1] and HRS-Bench [2]. So a new benchmark alone may not contribute enough to this field. It would be good to make one technical contribution that addresses the challenge emphasized in the newly built benchmark.\n\n[1] Cho, J., Zala, A., & Bansal, M. (2023). Dall-eval: Probing the reasoning skills and social biases of text-to-image generation models. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 3043-3054).\n[2] Bakr, E. M., Sun, P., Shen, X., Khan, F. F., Li, L. E., & Elhoseiny, M. (2023). Hrs-bench: Holistic, reliable and scalable benchmark for text-to-image models. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 20041-20053)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "How are the numbers calculated? Can you specify how you used Krippendorff’s alpha value? And where do the numbers 54.1% and 50.7% come from? I noticed that the CLIP and DINO scores fairly correlate with human scores."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "It fills a gap in human-aligned and automated evaluation of the personalized text-to-image generation task by introducing LLMs. The dataset and evaluation metric are clearly described in detail."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work introduces a new benchmark for personalized text-to-image generation, including a dataset with a greater number of images and prompts compared to DreamBench, and an automatic evaluation method that assesses two key aspects of the task: (i) concept preservation and (ii) prompt following, using multimodal large language models like GPT. This new evaluation method addresses the limitations of the previous DINO and CLIP metrics and achieves higher human alignment."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The new dataset aims to include a larger variety of images, but the source of these images is still relatively narrow (only 3 websites). The new dataset can still be biased due to the preferences of these websites, and there is insufficient evidence to show its diversity (only the t-SNE visualization).\n2. It is claimed that the method is transferable to other foundation models. This is not straightforward, because the method is specifically designed for GPT-4 and no experiment showed its transferability.\n3. The dataset only contains 1 image for every instance. Although multiple images are claimed to be unnecessary, the results in Fig. 9 show serious overfitting when using only 1 reference image.\n4. There are other key aspects that need evaluation. For example, a common problem of fine-tuning-based methods is overfitting. We generally don't want the generated images to be too similar to the reference image except for the identity of the object. It is worth trying to evaluate the overfitting problem using GPT."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see the weakness part. In addition,\n\n- Why is the comparison scheme considered unsuitable in lines 198-199 if the scoring results are sensitive? Was \"comparing\" mistakenly written as \"scoring\"?\n\n- Stable Diffusion often generates noisy backgrounds effectively and frequently. Why did the authors judge it as unsuitable for personalized generation and remove it?\n\n- Are the example prompts shown in Figure 4 intended for personalized editing with those specific prompts?\n\n- In Table 1, the highest-scoring model is consistent across Human, GPT, DINO-I, CLIP-I, and CLIP-T. Can DREAMBENCH++ really be said to be more aligned with humans than DINO or CLIP?\n\n- Contrary to the authors' claim, Figure 6 does not clearly show that DINO or CLIP prefer shape and overall styles. Also, do the authors believe that favoring shape and overall styles is not aligned with human preference?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "This paper proposes a metric that closely aligns with human preferences for evaluating personalized generated images and introduces a suitable benchmark. Compared to existing benchmarks, it constructs a much more diverse and extensive dataset and demonstrates evaluation results that are better aligned with humans than CLIP and DINO."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents DREAMBENCH++, a human-aligned benchmark for evaluating personalized image generation models using advanced multimodal models like GPT-4o for automated, human-like assessments. It addresses the misalignment and cost issues of traditional evaluation methods by focusing on prompt following and concept preservation with a diverse dataset. DREAMBENCH++ demonstrates superior alignment with human evaluations, offering a comprehensive and unbiased framework to advance personalized image generation research."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- For text-to-image generation models, there are many metrics, including TIFA[1], that improve upon CLIP. It is necessary to demonstrate alignment with humans not only compared to CLIP and DINO but also in comparison with these existing studies.\n\n[1] Hu, Yushi, et al. \"Tifa: Accurate and interpretable text-to-image faithfulness evaluation with question answering.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\n\n- The prompts used for GPT-4o are abstract. Studies that evaluate generated images using VQA typically ask absolute questions, such as the presence, location, or number of objects, to fully leverage GPT-4o’s performance. However, the prompting approach in this paper treats GPT-4o like a human, and the justification relies only on citations from other studies. In practice, some degree of hallucination occurs when GPT-4o evaluates images, which the paper overlooks, making this a significant drawback.\n\n-The scoring system from 0 to 4 is ambiguous. Some images may have around 10 objects, with 1 or 2 that do not match, while others may have all objects correct but differ in style. What are their scores? While GPT-4o might provide consistent evaluations, humans are less likely to be consistent. To address this, a larger user study should have been conducted, but only seven people participated, with each instance evaluated by at least two annotators. This means some images were reviewed by just two people, making the number of annotators insufficient."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Q1) Any design rationale on the score scale? Why not use a score in scale from 0 to 1? This would be more aligned with the range of traditional metrics (e.g. CLIPScore, LPIPS, DINO etc.)?\n\nQ2) How do the authors verify that the automated evaluation with GPT results are making sense (such that the reasoning fully reflects the score)?\n\nQ3) Please share the analysis of the costs and time needed for the human annotation, and how many data instance were annotated in total. Will the human annotated data be released? \n\nQ4) Authors might want to consider adding more models for comparison, including DisenBooth(ICLR 2024), EZIGEN (ArXiv 2024), \n\n---\nI believe this study will interest a broad audience. However, the contribution feels somewhat limited, and the discussion does not fully align with the claimed goal in the introduction. If W1 and W2 can be addressed, I would likely consider raising my score."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "S1) Fruitful discussion in diversity study, showcasing that models are good at certain types of generation. (e.g. all models generally perform better in Animal and Style due to sensitivity to facial details and diverse object categories). This aligns with the goal to figure out which model is superior in certain types of generation.\n\nS2) Well-organized study in showcasing the impact of prompt design including COT, ICL, Internal thinking etc.. when comes to automated evaluation with GPT."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper contributed a benchmark and a dataset for evaluating personalized image generation task, also an automated evaluating metric using GPT to address the issues of cost inefficient human evaluation, and also the lack of diversity in previous dataset/benchmark works. It included 7 models ( 3 more than prior works), and developed an semi-automated dataset creation pipeline. It also expanded dataset to 5x of images and 54x of prompts, and proposed the use of automated evaluating metric (GPT), with different settings studied (e.g. COT and ICL). The main goal of this work is to comprehensively evaluate these models to figure out which technical route is superior and where to head."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "W1) In Section 3.2 the authors claimed that \"Table 1 results show that DreamBench++ aligns better with humans than DINO or CLIP models...\", which is not very convincing. While it makes sense to show that the ranking order from GPT is more correlated to Human compared to DINO-I and CLIP-I, It is expected to show the correlation (Spearman / Pearson). I would suggest the authors to add a table regarding correlation between Human and GPT, and between Human and DINO/CLIP, supporting the claim. Also It seems a bit odd to use Krippendorff's Alpha to compare human ratings are other rating (GPT/DINO/CLIP) in Table 4. The two scales are inherently different and the meaning of ratings are different. Spearman / Pearson correlation would be a better metric for cross-scale reliability. Krippendorff's Alpha would be suitable for H-H as showing the inter-rater reliability.\n\nW2) The paper would have been much stronger if the authors included a section to discuss which model performs the best in certain types of generation and how is it associated with certain technical route. This will be more aligned to the purposed goal \"figure out which technical route is superior and where to head.\"\n\nW3) Minor confusion when reading the table 1 and 3 directly. For example, Table 3 does not mention what exactly are the score values are. Please extend table captions to explain what are the values in the revised version.\n\nW4) Please highlight the best model in each categories from Table 2 in the revised version. Maybe a leaderboard showcasing the best models in each category."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024dreambench,\ntitle={DreamBench++: A Human-Aligned Benchmark for Personalized Image Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=4GSOESJrk6},\nnote={under review}\n}"
},
"abstract": {
"value": "Personalized image generation holds great promise in assisting humans in everyday work and life due to its impressive function in creatively generating personalized content. However, current evaluations either are automated but misalign with humans or require human evaluations that are time-consuming and expensive. In this work, we present DreamBench++, a human-aligned benchmark that advanced multimodal GPT models automate. Specifically, we systematically design the prompts to let GPT be both human-aligned and self-aligned, empowered with task reinforcement. Further, we construct a comprehensive dataset comprising diverse images and prompts. By benchmarking 7 modern generative models, we demonstrate that \\dreambench results in significantly more human-aligned evaluation, helping boost the community with innovative findings."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"personalized image generation",
"subject-driven image generation",
"personalization",
"image generation",
"benchmarking",
"human-aligned evaluation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/072bac963ac74f7b8d4eb0a7f48a9df08824d6e5.pdf"
},
"presentation": null,
"primary_area": {
"value": "datasets and benchmarks"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "DreamBench++: A Human-Aligned Benchmark for Personalized Image Generation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
4GT9uTsAJE | AdaGrad under Anisotropic Smoothness: A Fine-Grained Analysis | main | Active | Optimization theory;Convergence analysis;Stochastic optimization;Adaptive gradient methods | optimization | 5;6;6;8 | 3;3;5;4 | 3;3;3;4 | 3;3;3;3 | 3;3;4;3 | 6.25 | 3.75 | 3.25 | 3 | 3.25 | 0.345857 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Can please you better compare to Convergence Analysis of Adaptive Gradient Methods under Refined Smoothness and Noise Assumptions - D Maladkar, R Jiang, A Mokhtari - arXiv preprint arXiv:2406.04592, 2024?\nAlso, is much work required to generalize to Adam?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The main strength lies in its novel anisotropic assumptions, which align well with AdaGrad’s observed performance in high-dimensional settings. The experiments effectively validate the theoretical benefits, highlighting AdaGrad’s adaptability to large batch sizes and diverse data structures. For the rest it is a standard optimization analysis."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper provides a detailed analysis of the AdaGrad optimization algorithm under anisotropic smoothness assumptions, addressing gaps in theoretical convergence for large-scale tasks. It introduces a new anisotropic smoothness framework that better explains AdaGrad’s convergence speed, especially for large-batch training. Experiments on logistic regression and GPT-2 fine-tuning support these theoretical claims, showing AdaGrad’s improved performance over SGD."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "This kind of work always relies on assumptions which limits their applicability to the setting of interests, as neural networks. However, this is common and not really an issue. See also questions."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Convexity is used in a large part of the paper. In many machine learning models, there are more parameters than data. In this case, local minima w* may not be isolated points. Instead, it can be a manifold. What is the impact of overparametrization to the results in this work?\n\n2. In Table 2, the authors list the coefficients and norms in the analytical results. It is also important to see how well the convergence of the loss (or gradients) are controlled by these coefficients and norms.\n\n3. For Table 4, since the work mainly discusses the convergence rate of Adagrad, it is better to show how the loss converges and how do the authors select hyperparameters."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "The work provides an analysis result which may be the first one for Adagrad. This can be helpful for others to understand the potential of Adagrad and select optimizers for training tasks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The work provide analysis result for the convergence of Adagrad for training of machine learning models with large batch size, emphasizing the effects of anisotropic smoothness. The authors then compare the results with similar results for SGD and Adagrad-norm and point out the potential of Adagrad. In general, the work can be helpful to under stand the training process."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The numerical results are not sufficient to verify the assumptions and analytic results."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "See summary"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "See summary"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the convergence of Adagrad under anisotropic smoothness conditions. The main contributions are:\n1. Defining the anisotropic smoothness condition.\n2. Studying the convergence of Adagrad for convex and nonconvex problem under the anisotropic condition.\n3. Further nonconvex results for relaxed smoothness conditions. \n\nStrength:\n\nI think the paper is presented in a very clean and organized manner. The key results and discussions are clear. \n\nUp to my knowledge, although anisotropic smoothness were hinted across different setups, there is no very systematic study prior to this work. Therefore, I think the results here can be a valid contribution to optimization theory.\n\nWeakness:\n\n-The results are not surprising, and hence I didn't find the analysis / statements to be novel.\n\n\n-In addition to reviewing adagrad analyses, it would be helpful to review anisotropic analysis. Several related works that I could think of : analysis on coordinate descent; Nesterov's study on zero-order methods; adagrad's advantage on sparse gradients; Adam's convergence on infinity norm rather than l2 norm of gradients, etc.\n\nAlthough, the above results probably are not directly comparable, it would be good to summarize and discuss the differences.\n\nSome results that can make the work more impressive are listed below:\n\n-Lower bounds to justify when and why adaptive step, diagonal / full matrix adaptivity are sufficient / insufficient would be very interesting. \n\n-Given the analysis, can faster methods be proposed for neural nets?"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "See summary"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See the weakness section."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper analyzed the convergence rate of AdaGrad. Then, it showed that the convergence rate of SGD depends on $D_{\\infty}$, while the rate of AdaGrad depends on $D_2$. $D_2$ depends on the dimension of parameters, while $D_{\\infty}$ does not. Thus, this paper claimed that AdaGrad converges faster than SGD."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper analyzed the convergence rate of AdaGrad, showing that AdaGrad converges faster than SGD with respect to the dimension of parameters."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The authors compared the convergence rate of AdaGrad in Theorem 4.1 and the convergence rate of SGD, Eq. (6). The convergence rate in Eq. (6) depends on $D_2$, while the tighter convergence rate that depends on only $\\| x_0 - x^\\star\\|$ was more common. \n\n2. $D_\\infty$ does not depend on the dimension of the parameter. However, it is unclear whether $D_\\infty$ is smaller than $\\| x_0 - x^\\star\\|$.\n\n2. Theorem 4.1 assumes that $L_1 = 0$, which sounds a bit strong assumption. The reviewer feels that it would be better to provide the intuition why this assumption is necessary, at least in the Appendix.\n\n2. It was confusing which term corresponds to \"bias term\" in line 272."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024adagrad,\ntitle={AdaGrad under Anisotropic Smoothness: A Fine-Grained Analysis},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=4GT9uTsAJE},\nnote={under review}\n}"
},
"abstract": {
"value": "Adaptive gradient methods have been widely adopted in training large-scale deep neural networks, especially large foundation models. Despite the huge success in practice, their theoretical advantages over classical gradient methods with uniform step sizes across all coordinates (e.g. SGD) have not been fully understood, especially in the large batch-size setting commonly used in practice. This is because the only theoretical result that can demonstrate this benefit was obtained in the original paper of Adagrad for convex nonsmooth objective functions, which is insufficient for large batch algorithms. In this work, we attempt to resolve this gap between theory and practice by proposing a novel anisotropic generalized smoothness assumption and providing corresponding analysis of Adagrad. It is shown that under anisotropic smoothness and noise conditions, AdaGrad can achieve faster convergence guarantees in terms of better dimensional dependence than algorithms with uniform step sizes across all coordinates. Experiments in logistic regression and instruction following fine-tuning tasks provide strong evidence to support our novel assumption and theoretical analysis."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Optimization theory",
"Convergence analysis",
"Stochastic optimization",
"Adaptive gradient methods"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/31247ccd2ad927f3d1d5d3068cb4b77bcab0fe8c.pdf"
},
"presentation": null,
"primary_area": {
"value": "optimization"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "AdaGrad under Anisotropic Smoothness: A Fine-Grained Analysis"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |