Dataset schema (one record per submission):

| column | dtype | range / classes |
| --- | --- | --- |
| id | string | length 10–10 |
| title | string | length 3–179 |
| track | string | 1 class |
| status | string | 3 classes |
| keywords | string | length 2–2.39k |
| primary_area | string | 21 classes |
| author | string | 501 classes |
| authorids | string | 501 classes |
| aff | string | 1 class |
| aff_domain | string | 1 class |
| position | string | 1 class |
| rating | string | 355 classes |
| confidence | string | length 0–19 |
| soundness | string | 642 classes |
| contribution | string | 596 classes |
| presentation | string | 782 classes |
| rating_avg | float64 | 0–9 |
| confidence_avg | float64 | 0–5 |
| soundness_avg | float64 | 0–4 |
| contribution_avg | float64 | 0–4 |
| presentation_avg | float64 | 0–4 |
| corr_rating_confidence | float64 | -1–1 |
| project | string | 1 class |
| github | string | 1 class |
| Review | list | length 2–10 |
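The records below can be consumed programmatically. A minimal loading sketch, assuming the dump is hosted on the Hugging Face Hub; the repo id `your-org/iclr2025-openreview` is a placeholder, not the dataset's real name:

```python
from datasets import load_dataset

# Hypothetical repo id; substitute the actual dataset path.
ds = load_dataset("your-org/iclr2025-openreview", split="train")
row = ds[0]
print(row["id"], row["title"])

# Per-reviewer score columns are semicolon-delimited strings:
ratings = [float(x) for x in row["rating"].split(";")]  # "3;6;6;6" -> [3.0, 6.0, 6.0, 6.0]
```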
id: x1Okv4kbVR
title: MACPO: Weak-to-Strong Alignment via Multi-Agent Contrastive Preference Optimization
track: main
status: Active
keywords: weak-to-strong alignment;preference optimization
primary_area: alignment, fairness, safety, privacy, and societal considerations
rating: 3;6;6;6
confidence: 5;4;4;4
soundness: 2;3;3;4
contribution: 2;3;3;3
presentation: 3;1;3;2
rating_avg: 5.25
confidence_avg: 4.25
soundness_avg: 3
contribution_avg: 2.75
presentation_avg: 2.25
corr_rating_confidence: -1
Review:
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see the weaknesses 2,3" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. The research problem is important and timely, as large language models are rapidly advancing and achieving near-human capabilities, and allowing them to surpass human performance with potentially only weak human supervision is a critical issue.\n2. The paper is well-written and clearly structured, with a good introduction and related work section, clearly described methodology, and well-organized experiments and results.\n3. The experimental results are promising. The scale of the experiments, diversity of evaluation, and the rich comparison are appreciated." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper explores the alignment problem when the LLM outperforms humans and human supervision is therefore weak. The authors propose a multiagent contrastive preference optimization (MACPO) framework to for weak-to-strong alignment by iteratively reinforcing unfamiliar positive behaviors while penalizing familiar negative ones. The authors evaluate their method on HH-RLHF and PKU-SafeRLHF datasets and show that MACPO improves alignment performance of strong students and weak teachers." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While the description of the MACPO framework in section 4.1, 4.2 is clear, the introduction of the framework in line 053 and line 196 is confusing. This is mainly due to the use of term 'behavior' and 'familiar' before they are defined. Moreover, the paragraph in line 196 is almost the same as the paragraph in line 061, which does not provide improved clarity.\n2. In Section 3.1, the formulation of the problem, the authors adopt an analogy setting where weak teachers are fine-tuned small models and strong students are big models 'initialized on weak labels generated by weak teachers for the held-out question set(line 261)'. In this case, it is not clear whether the strong student is truly strong, and it is not clear how this situation relates to the motivation problem in line 013. \n3. Computation efficiency discussion and comparison would be helpful as the proposed method requires multiple iterations of optimizing multiple models. Would the Strong-to-weak alignment and Self-alignment benefit from the longer training time (with the same effective computation as MACPO)?" 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In Section 3.1, the paper assumes that using weak teachers can incrementally improve alignment by reinforcing positive behaviors. Could the authors clarify what criteria define a \"weak teacher\" and how its effectiveness in improving alignment is measured compared to using strong agents from the beginning?\n\nThe paper describes using multiple agents for reinforcement but lacks details on whether adjustments are made dynamically based on each agent's performance. Are there mechanisms in place to adjust reinforcement strategies depending on agent success rates, and if so, how are these adjustments implemented to ensure optimal alignment?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The idea of using a multi-agent contrastive preference optimization approach to achieve weak-to-strong alignment is innovative. Most existing work focuses on direct fine-tuning or reinforcement learning methods to improve alignment, but these approaches cannot leverage the incremental learning power of weak teachers to iteratively refine alignment, limiting the model’s ability to generalize across various behaviors. This paper introduces a method that enhances alignment by reinforcing positive unfamiliar behaviors and penalizing negative familiar ones, as well as significantly improves alignment performance as more weak teachers contribute to training.\n\n2. The workflow is well-structured, as it combines contrastive preference optimization with a multi-agent setup to make the alignment process adaptable and iterative. This approach encourages behavior diversity among agents and further enhances the robustness and effectiveness of the alignment process.\n\n3. The experiments are extensive, with detailed analysis of the results. These experiments validate the effectiveness of the MACPO framework and demonstrate the scalability and adaptability of the method across models with different sizes and complexities​." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a multi-agent contrastive preference optimization (MACPO) approach for weak-to-strong alignment in LLMs. The authors utilize a multi-agent framework that iteratively improves both weak teachers and strong students by reinforcing unfamiliar positive behaviors and penalizing familiar negative ones. 
The experimental results show that MACPO achieves enhanced alignment performance over traditional methods, particularly as the number of weak teachers increases.\n\nThe main contributions of the paper include: 1) introducing the MACPO framework for weak-to-strong alignment; 2) incorporating mutual positive behavior augmentation and hard negative behavior construction to support iterative improvements; and 3) validating the proposed method's effectiveness through experiments on helpfulness and harmlessness alignment datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The experiments use only two alignment datasets; expanding the evaluation to include additional alignment tasks like toxicity detection or complex reasoning would provide a more comprehensive assessment of MACPO's generalizability.\n\n2. In Section 4.2 (HARD NEGATIVE BEHAVIOR CONSTRUCTION), the paper does not clearly explain how agent interactions are managed or how behaviors are tracked across weak and strong agents throughout the alignment process." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the weaknesses; I'm happy to modify my rating based on the response of the authors and on other reviewers' comments." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The authors clearly define the problem, making it easy to understand the motivations behind their approach. This clarity in the problem statement helps to set a strong foundation for the rest of the work. This paper also opens up several avenues for future research, providing a solid foundation for follow-up studies. The authors discuss limitations and possible extensions, showing an awareness of the field's current needs and future directions." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces MACPO, a framework for weak-to-strong alignment that enables strong language models to learn from weaker teachers. The authors claim that MACPO enhances positive behavior exchange and alignment refinement through iterative strategies, improving both teacher and student alignment, and results on benchmark datasets show MACPO’s effectiveness, with stronger alignment as the number of weak teachers grows." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The most concerning aspect is the statement in line 224: *“Since a high perplexity ppl of the positive strong student indicates weak labels may contain negative noises,”* which appears to form the foundational assumption of the entire framework. I have not encountered a statement like this before.
In my opinion, alignment and preference learning should prioritize evaluation metrics as the basis for setting up metrics. Providing substantial evidence, such as a reference paper or experiments to demonstrate this metric’s validity and necessity, would be essential for this work. However, I could not find corresponding evidence, which may represent a fundamental weakness. At least the author needs to show a positive correlation between perplexity and answer quality (although this positive correlation is also very weak evidence to me, considering that correlation does not mean causation). However, the author did not even show this most basic evidence, which is a serious limitation.\n\nI outline other concerns:\n\n1. I find the pipeline somewhat confusing. According to Algorithm 1, the proposed method first derives the “Strong Student” before proceeding with “Weak Teacher” training. Typically, in teacher-student learning paradigms, the teacher model is trained first and then provides supervision for the student model. If my understanding of the pipeline is correct, could the authors clarify the rationale behind this approach?\n\n2. It appears that the model sizes listed in Table 1 are inconsistent. For example, MACPO results are reported using both the 8B and 70B models, while other approaches only rely on the 8B model. This discrepancy may undermine the fairness of the comparisons.\n\n3. In Table 1, how are the final evaluation results derived based on a third-party reward model? With multiple gold models available in RewardBench, it would be helpful for the authors to explain their choice of this particular model over others.\n\n4. For HH-RLHF training, it’s unclear why the authors opted to use only 10K samples, which represent less than 10% of the original dataset. Do the final results depend heavily on this specific subset? This choice should be validated, as it raises concerns about the generalizability of the experimental results.\n\n5. Are the experiments restricted solely to alignment for helpfulness and harmlessness? It would be beneficial to extend these experiments to other tasks, such as Reddit TL;DR. Even if the approach does not perform well on such tasks, presenting these results could offer valuable insights." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "If I understand correctly, when constructing positive behaviors, one selection criterion is that the weak labels should have low perplexity when evaluated on the strong student. However, doesn’t this low perplexity mean that this weak label is familiar to the strong student? Because low perplexity indicates that this weak label is likely to be generated by the strong student as well. This seems to contradict the goal of generating unfamiliar positive behavior."
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The proposed approach is intuitive and partially solves the problem of collapsing by learning through self-generated data. \n\n2. The ablation study is comprehensive, validating the claims. The authors clearly illustrate the benefits brought up by unfamiliar positive behavior and familiar negative behavior." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes Multi-Agent Contrastive Preference Optimization (MACPO), which aims at letting strong students and weak teachers to learn from each other by encouraging unfamiliar positive behaviors and penalizing familiar negative behaviors. The proposed algorithm achieves better performance compared with other weak-to-strong alignment methods on helpfulness and harmlessness benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The writing can be significantly improved, especially for section 4. The notation often comes with 4 to 5 super and subscripts, which is very difficult to follow. I highly recommend the authors clean this part up.\n\n2. The proposal is intuitive, but the concept of “familiar” and “unfamiliar” is not well defined or discussed. Does familiar mean self-generated content and unfamiliar mean content generated by other models? What are the measures you can use for determining familiarity?" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a multi-agent contrastive preference optimization (MACPO) framework to facilitate weak teachers and strong students learn from each other to improve weak-to-strong alignment performance." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024macpo,\ntitle={{MACPO}: Weak-to-Strong Alignment via Multi-Agent Contrastive Preference Optimization},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x1Okv4kbVR},\nnote={under review}\n}" }, "abstract": { "value": "As large language models (LLMs) are rapidly advancing and achieving near-human capabilities, aligning them with human values is becoming more urgent. In scenarios where LLMs outperform humans, we face a weak-to-strong alignment problem where we need to effectively align strong student LLMs through weak supervision generated by weak teachers. Existing alignment methods mainly focus on strong-to-weak alignment and self-alignment settings, and it is impractical to adapt them to the much harder weak-to-strong alignment setting. To fill this gap, we propose a multi-agent contrastive preference optimization (MACPO) framework. MACPO facilitates weak teachers and strong students to learn from each other by iteratively reinforcing unfamiliar positive behaviors while penalizing familiar negative ones. To get this, we devise a mutual positive behavior augmentation strategy to encourage weak teachers and strong students to learn from each other’s positive behavior and further provide higher quality positive behavior for the next iteration. Additionally, we propose a hard negative behavior construction strategy to induce weak teachers and strong students to generate familiar negative behavior by fine-tuning on negative behavioral data. 
Experimental results on the HH-RLHF and PKU-SafeRLHF datasets, evaluated using both automatic metrics and human judgments, demonstrate that MACPO simultaneously improves alignment performance of strong students and weak teachers. Moreover, as the number of weak teachers increases, MACPO achieves better weak-to-strong alignment performance through more iteration optimization rounds." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "weak-to-strong alignment", "preference optimization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/68485b2c007c09bae088cc313edc0be469f1b8e7.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "MACPO: Weak-to-Strong Alignment via Multi-Agent Contrastive Preference Optimization" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
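The derived columns can be recomputed directly from the raw semicolon-delimited score strings. A sanity-check sketch for the record above, using NumPy's Pearson correlation (here the ratings are an exact linear function of confidence, hence the correlation of exactly -1):

```python
import numpy as np

def parse(scores: str) -> np.ndarray:
    """Split a semicolon-delimited score string into a float array."""
    return np.array([float(s) for s in scores.split(";")])

rating, confidence = parse("3;6;6;6"), parse("5;4;4;4")
print(rating.mean())                          # 5.25 -> rating_avg
print(np.corrcoef(rating, confidence)[0, 1])  # -1.0 -> corr_rating_confidence
```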
id: x1SfON9HvT
title: Diffusion Modulation via Environment Mechanism Modeling for Planning
track: main
status: Active
keywords: Reinforcement Learning;Offline Reinforcement Learning;Planning;Diffusion Model
primary_area: reinforcement learning
rating: 3;3;3;5
confidence: 4;5;5;3
soundness: 2;2;3;2
contribution: 2;2;2;3
presentation: 3;2;4;3
rating_avg: 3.5
confidence_avg: 4.25
soundness_avg: 2.25
contribution_avg: 2.25
presentation_avg: 3
corr_rating_confidence: -0.870388
Review:
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. The authors need to include Decision Diffuser as the baseline.\n\n2. Why there are only part of the environments in the ablation study in Table 3." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The presentation of this paper is great and easy to follow.\n2. The idea of this paper is simple and clear. \n3. The experimental result looks good." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a new diffusion-based RL planning method named DMEMM. Unlike previous methods that use fixed isotropic variance and disregard the rewards which may lead to a mismatch between generated trajectories and those desirable for RLs, DMEMM explicitly incorporates transition-based, reward-based, and reward-aware diffusion modulation loss. By doing so, DMEMM enhances both the coherence and quality of generated trajectories, achieving state-of-the-art performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The tested diffusion-based offline RL planning algorithm missed the Decision Diffuser[1], which performance is much better than the original Diffuser. Also, it's important that the Decision Diffuser doesn't have the issue of inaccurate transition as it only predicts the state sequence and calculates the actions by inverse dynamics.\n\n2. The motivation of this paper is ambiguous. I assume that the initial motivation is to improve the consistency of the dynamic by explicitly incorporating the loss of the difference between the real states after transition and the generated states. However, the initial idea of diffusion models is to measure the distribution of the trajectories, which has already included the dynamics. I'm wondering if adding such a loss can really improve the consistency of dynamics. The authors need to seriously discuss this problem.\n\n3. Same as previous papers, this paper also uses the paradigm of autoregressive generation, where each time only the first action will be executed. In this case, generating accurate transitions seems to be less important. \n\n4. The paper is lack of novelty, the idea of adding a loss on the transitions is an incremental work.\n\n\n[1] Is Conditional Generative Modelling All You Need for Decision-Making? Anurag A., et al. ICLR 2023." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Will the authors publish the source code?\n\nI do not have further questions since the paper was written in a very straightforward way and easy to understand. To improve the work, please see the comments above." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper was written in a very straightforward way, making it easy to follow and understand\n2. The being discussed problem is important and the paper proposed a practical method to solve it." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focused on offline reinforcement learning (offline RL) with diffusion models as planner. Following Diffuser (Janner et al. 2022), a series of studies have been proposed the improved the performance. This paper discussed the problem of generating trajectories with consistency between transitions to ensure coherence. The being proposed methods includes a transition-based diffusion modulation loss, and dual reward guidance. Using the mixed loss function, the being proposed method was tested on D4RL locomotion, showing SOTA performance compared to previous diffusion planning algorithms; and on Maze2D, showing similar performance with previous best." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The being tested benchmarks are too narrow, recent diffusion planning studies usually included other D4RL environments AntMaze, Franka Kitchen, etc. But this paper only examined D4RL locomotion, thus it remains unclear whether the being proposed method work well in general tasks.\n\n2. While the being proposed method performs well in D4RL locomotion compared with previous diffusion planning methods (avg. normed. return = 87.9, Table 1), it should be discussed that the previous best diffusion model-based method on this benchmark was from diffusion policy [1] (avg. normed. return = 88.0 [1], and 89.0 in [2]).\n\n3. When compared with HD-DA, the performance on the Maze2D (Table 2) is worse when the size becomes large. Since I do not see other experimental results on planning, I am skeptical about the scalability of the being proposed method when task becomes more challenging.\n\nOverall, the paper provides limited insights. There lack both theoretical and empirical insights about the problems being touched, due to limited variety of benchmarks and limited analysis (with only a very formulaic ablation study), making this paper not qualified for the ICLR bar. \n\n## Minor problems\n\n- In Tables 1 and 2, error bar should be explained whether it is STD or SEM or confidence interval.\n- Many related works on diffusion planning on offline RL are not discussed, see [3] for reference\n- No reproducibility statement\n\n### Ref\n\n[1] Wang, Zhendong, Jonathan J. Hunt, and Mingyuan Zhou. \"Diffusion Policies as an Expressive Policy Class for Offline Reinforcement Learning.\", ICLR 2023.\n\n[2] Dong, Zibin, et al. \"CleanDiffuser: An Easy-to-use Modularized Library for Diffusion Models in Decision Making.\", NeurIPS 2024 Track on Datasets and Benchmarks.\n\n[3] Zhu, Zhengbang, et al. 
\"Diffusion models for reinforcement learning: A survey.\" arXiv preprint arXiv:2311.01223 (2023)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The paper needs to explain how diffusion models can be applied within imitation learning [1, 2], a subset of reinforcement learning.\n\n2. The experiments, limited to locomotion and Maze2D, are insufficient. Adding manipulation and image-based tasks would strengthen credibility, especially given diffusion models’ origins in image domains.\n\n3. How does DMEMM handle environments with highly stochastic transitions or non-stationary reward functions, and what are the main challenges it faces in such contexts?\n\n4. What are the primary computational trade-offs introduced by incorporating dual-guided sampling in DMEMM, and how does it perform compared to standard model-based and model-free RL approaches in terms of efficiency?\n\n5. To what extent is DMEMM sensitive to the weighting of transition- and reward-based modulation losses, and what practical guidelines can be provided for setting these parameters across various RL environments?\n\n6. Can the proposed transition-guided sampling in DMEMM be further adapted to handle hierarchical or multi-step decision-making tasks, and if so, what modifications might enhance its applicability?\n\nI would be inclined to raise the score if you clear up my cnofusion and discuss diffusion model's applicability to imitation learning.\n\n[1] Tim Pearce, Tabish Rashid, Anssi Kanervisto, David Bignell, Mingfei Sun, Raluca Georgescu, Sergio Valcarcel Macua, Shan Zheng Tan, Ida Momennejad, Katja Hofmann, and Sam Devlin. Imitating human behaviour with diffusion models. In International Conference on Learning Representations, 2023.\n\n[2] Shang-Fu Chen, Hsiang-Chun Wang, Ming-Hao Hsu, Chun-Mao Lai, and Shao-Hua Sun. Diffusion model-augmented behavioral cloning. In International Conference on Machine Learning, 2024." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. **Motivation and intuition**: The motivation for improving trajectory generation in RL through transition-consistent diffusion models is convincing.\n\n2. **Novelty**: The way of how to utilize transition dynamics and cumulative reward modulation in the diffusion model is intuitive and novel. This paper presents an effective way to implement this idea.\n\n3. **Technical contribution**: The transition-based modulation loss for enforcing transition consistency seems particularly effective for producing coherent trajectories in complex environments.\n\n4. **Ablation study**: The ablation studies are comprehensive, breaking down each framework component and showing its role in improving trajectory quality." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the challenge of improving diffusion model-based trajectory generation for offline reinforcement learning (RL) by incorporating transition dynamics and reward functions through a new framework, Diffusion Modulation via Environment Mechanism Modeling (DMEMM). Experiments show that DMEMM achieves state-of-the-art results on D4RL and Maze2D tasks, with ablation studies highlighting the effectiveness of each framework component. This work addresses a key gap by enhancing trajectory coherence with real-world constraints in diffusion-based RL planning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Clarity**: The meaning of some notations, such as the subscripts and superscripts on τ, is not clearly explained. While the intended meaning can be inferred after careful review, a clearer definition of each notation would improve the paper's readability.\n\n2. **Completeness**: The paper lacks a discussion on the application of diffusion models within imitation learning [1, 2], a subset of reinforcement learning, which is necessary to clarify the advantages of using diffusion models in planning.\n\n3. **Experiment details**: The experiments are not sufficient, as they focus only on locomotion and Maze2D environments. Adding other environments, such as manipulation and image-based tasks, would enhance the credibility of the results. Given that diffusion models originated in image domains, testing on image-based environments would be valuable.\n\n4. **Reproducibility**: The authors did not provide code nor commit to releasing it upon acceptance, which limits the reproducibility of the results." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is well-written and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a model-based offline diffusion RL method. By adding auxiliary loss about state transitions and cumulative rewards to the conventional diffusion loss, the authors claim to solve the problem of generating trajectory discontinuity when diffusion is applied to reinforcement learning. The authors validate the proposed method on D4RL and claim that their method achieves SOTA." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The PRELIMINARIES are not convincing enough. 
The author claims in the PRELIMINARIES that reinforcement learning is modeled as a quadruple $(S,A,T,R)$ in (Sutton & Barto, 2018), whereas in general, we model reinforcement learning as a quintuple $(S,A,T,R,\\gamma)$. The objective of reinforcement learning policy optimization is generally to maximize the cumulative \\textbf{discounted} reward, and this paper does not mention anything about the reward discount $\\gamma$. Given that this is significantly different from the normal research paradigm of reinforcement learning, if this is by design, the author should explain the purpose of doing so. If this is an oversight, I hope the author will correct it in future research.\n\n2. The motivation is unclear. The motivation presented by the authors is that \"the use of fixed isotropic variance and the disregard for rewards may lead to a mismatch between generated trajectories and those desirable for RL\". I don't understand how this problem relates to the method proposed by the author. The authors claim that this problem comes from (Wu et al., 2019). However, after browsing the paper, I did not find that the author of the original article raised this problem. If this is proposed for the first time, please provide more detailed analysis and examples to prove that it does exist and will have an impact on offline reinforcement learning, and it is necessary to prove in the experiments that the method proposed in this paper will indeed solve or alleviate this problem.\n\n3. The novelty of the method is limited. Since the reverse process of the diffusion model is to predict the denoising error, I do not quite understand how formula (9) in the paper is attached to the loss of the diffusion model. From formula (10) in the paper, I believe that the reward model fits the Q function in reinforcement learning (of course, there is no discount term $\\gamma$). Formula (11) is the Q-weighted fitting error. Overall, neither learning the state transition nor using the Q function impresses me.\n\n4. The results are not convincing. The authors' experimental results on D4RL did not live up to their claims of SOTA. Many studies using diffusion models for offline reinforcement learning have provided more competitive experimental results, such as [1, 2]. In addition, the authors state in the ablation experiment: \"Notably, both DMEMM-w/o-λtr and DMEMM-w/o-tr-guide exhibit significant performance drops, emphasizing the crucial role of incorporating transition dynamics in our method.\" However, from the experimental results in Table 3, the difference between DMEMM-w/o-λtr, DMEMM-w/o-tr-guide, and DMEMM is insignificant. If the authors stick to this point, I hope they can provide more convincing evidence, such as significance testing.\n\nReference:\n\n[1] Chen H, Lu C, Ying C, et al. Offline Reinforcement Learning via High-Fidelity Generative Behavior Modeling[C]//The Eleventh International Conference on Learning Representations.\n[2] Wang Z, Hunt J J, Zhou M.
Diffusion Policies as an Expressive Policy Class for Offline Reinforcement Learning[C]//The Eleventh International Conference on Learning" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024diffusion,\ntitle={Diffusion Modulation via Environment Mechanism Modeling for Planning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x1SfON9HvT},\nnote={under review}\n}" }, "abstract": { "value": "Diffusion models have shown promising capabilities in trajectory generation for planning in offline reinforcement learning (RL). However, conventional diffusion-based planning methods often fail to account for the fact that generating trajectories in RL requires unique consistency between transitions to ensure coherence in real environments. This oversight can result in considerable discrepancies between the generated trajectories and the underlying mechanisms of a real environment. To address this problem, we propose a novel diffusion-based planning method, termed as Diffusion Modulation via Environment Mechanism Modeling (DMEMM). DMEMM modulates diffusion model training by incorporating key RL environment mechanisms, particularly transition dynamics and reward functions. Experimental results demonstrate that DMEMM achieves state-of-the-art performance for planning with offline reinforcement learning." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Reinforcement Learning", "Offline Reinforcement Learning", "Planning", "Diffusion Model" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/ebf4a2e27b8e3a83646101628c76704bf83ff4fe.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Diffusion Modulation via Environment Mechanism Modeling for Planning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
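Each entry of the `Review` list is an OpenReview-style note: unused fields are null, populated fields are wrapped as `{"value": ...}`, and the final entry holds submission metadata (TLDR, bibtex, abstract) rather than a review. A sketch for unpacking it under those assumptions, reusing `row` from the loading sketch above:

```python
def unwrap(field):
    """Strip the {'value': ...} wrapper used by populated note fields."""
    if isinstance(field, dict):
        return field.get("value")
    return field

for note in row["Review"]:
    rating = unwrap(note.get("rating"))
    if rating is None:  # skip the trailing submission-metadata entry
        continue
    print(rating, unwrap(note.get("soundness")), unwrap(note.get("confidence")))
```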
id: x1nlO1d1iG
title: CogMath: Evaluating LLMs' Authentic Mathematical Ability from a Cognitive Perspective
track: main
status: Active
keywords: Mathematical Reasoning;Large Language Models
primary_area: foundation or frontier models, including LLMs
rating: 3;5;5
confidence: 3;4;3
soundness: 2;3;2
contribution: 1;3;3
presentation: 2;3;3
rating_avg: 4.333333
confidence_avg: 3.333333
soundness_avg: 2.333333
contribution_avg: 2.333333
presentation_avg: 2.666667
corr_rating_confidence: 0.5
Review:
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See above." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This work includes a CogMath framework that could be helpful in evaluating the math reasoning abilities of LLMs more robustly, and covers several dimensions that may introduce perturbation to the stability of reasoning." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work introduces a CogMath framework that consists of nine agents to evaluate the mathematical reasoning ability of large language models from the perspective of comprehension, problem solving and solution summarization. \n\nSpecifically, in the comprehension stage, the agents attempt to rephrase, disrupt (permute word ordering), remove condition and add condition of the original question. In the problem solving stage, the agents attempt to conduct analogical reasoning, numerical transformation and knowledge refinement (reshape the semantics of 'half') of the original question. In the solution summarization stage, agents attempt to question the information in the intermediate steps of the solution, and conduct backward reasoning of the question. The experimental results demonstrate that the abilities of current strong LLMs on GSM8K and MATH are overestimated by 30-40% by the calibration of those agents. Besides, CogMath may not serve as an effective prompt-based reasoning enhancement and the problem difficulty and lengths in MATH are negatively correlated with the pass rates in CogMath." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "It is not easy to imagine how a handful of agents included in CogMath can be generalized to more challenging questions. For example, how does the backward reasoning being feasible in mathematical proofs. The knowledge redefinition only limits its scope to 'half' and the questions contain the word, where works like FRoG (Li et al. 2024) includes richer quantifier-based variants of GSM8K. It is not surprised to see that the current figures of LLMs in reasoning is not a stable display, but attempts like one-time numerical transformation might make it more robust, but only marginally. Besides, I didn't find enough evidence regarding efforts to make sure the agents faithfully finish their jobs.\n\nThis work also collects MExam. 
However, I learned nothing about it from the contents.\n\n\nReference\n\n[1] FRoG: Evaluating Fuzzy Reasoning of Generalized Quantifiers in Large Language Models" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "How can the inquiry agents ensure that they generate good questions that meet the dimension requirements? Is there any filtering process involved?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Comprehensive evaluation across nine dimensions can enhance the current math benchmarks.\n- Extensive experiments with multiple representative LLMs demonstrate the limitations of current LLMs' math reasoning capabilities." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a multi-agent framework, CogMath, to evaluate the mathematical abilities of LLMs from a cognitive perspective. CogMath breaks down mathematical problem-solving into 3 stages: problem comprehension, problem solving, and solution summarization, with nine evaluation dimensions. CogMath generates test samples across multiple cognitive perspectives using a multi-agent system and reveals that current LLMs' math capabilities are overestimated, demonstrating the strengths and weaknesses of different models across the evaluation dimensions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- CogMath uses LLMs to construct test samples and evaluate the model-generated answers across multiple dimensions. However, the correctness of generated test cases and the evaluation quality can be a major concern. It would be helpful to add human evaluation on the generated test samples and the judging process.\n- The performance degradation can be an expected outcome rather than evidence of overestimation, as the CogMath test questions can be harder than the original questions after processing across multiple dimensions." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the weaknesses section" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tA comprehensive and scientific benchmark that deeply investigates the flexible reasoning of LLMs is essential for the community.\n2.\tThe authors consider nine dimensions across problem comprehension, solving, and solution summarization, which aids in identifying the main challenges faced by current models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a benchmark for comprehensively evaluating the mathematical abilities of LLMs by examining three cognitive reasoning stages: problem comprehension, problem solving, and solution summarization. Experiments indicate that we may overestimate the capabilities of current LLMs, primarily due to their excessive imitation of superficial reasoning patterns." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tWhile this work addresses cognitive mathematical dimensions comprehensively, I have a question regarding the motivation. Why do the authors believe that previous works introducing perturbations into existing benchmarks are task-specific?\n2.\tMore details are needed about the dataset construction procedure, including how the judge agent is used to ensure the quality of $q_i$, how multiple reference agents are negotiated to finalize the answer, and which foundation models are utilized behind these agents.\n3.\tFor Figure 2, it would be clearer to replace the dimension index with the dimension name. Additionally, for Figure 3, it would be more straightforward if each group of bars represents the same model.\n4.\tIn Section 4.6, the current experiment uses a one-shot setting. Have the authors considered a nine-shot setting, where each demonstration represents one dimension?\n5.\tIn Section 4.7, how are the five tiers of difficulty defined?\n6. In Table 1: Should $21 be 21?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024cogmath,\ntitle={CogMath: Evaluating {LLM}s' Authentic Mathematical Ability from a Cognitive Perspective},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x1nlO1d1iG},\nnote={under review}\n}" }, "abstract": { "value": "As large language models (LLMs) exhibit potential in solving complex mathematical tasks, increasing attention has been directed toward constructing benchmarks to evaluate their mathematical capabilities. However, existing benchmarks are either limited to specific task types (e.g., long-text problem understanding) or rely solely on a coarse measure of answer accuracy, making them insufficient for assessing a model's authentic mathematical proficiency. In this paper, we propose CogMath, which provides a comprehensive assessment of LLMs' mathematical abilities based on human cognitive processes. 
Specifically, inspired by cognitive theories, CogMath formalizes the reasoning process into 3 stages that align with human cognition: problem comprehension, problem solving, and solution summarization, and encompasses 9 fine-grained evaluation dimensions from perspectives such as numerical calculation, knowledge, and counterfactuals. In each dimension, to carry out a scientific evaluation, we develop an ``Inquiry-Judge-Reference'' multi-agent system, where the Inquiry agent generates inquiries that assess LLMs' mastery from this dimension, the Judge agent ensures the inquiry quality, and the Reference agent provides correct responses for comparison with the LLMs' actual performances. A LLM is considered to truly master a problem only when excelling in all inquiries from the 9 dimensions. In experiments, we evaluate 7 mainstream LLMs by applying CogMath to three benchmarks, which cover the full K-12 mathematical curriculum. The results reveal that the authentic mathematical capabilities of current LLMs are overestimated by 30-40%. Moreover, we locate their strengths and weaknesses across different stages/dimensions, offering constructive insights to further enhance their reasoning abilities." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Mathematical Reasoning", "Large Language Models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/00361965dee521cf43a8528825eac1c5beed233c.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/ef81be3b6c1780541e7d8b562f78f344829894f8.zip" }, "title": { "value": "CogMath: Evaluating LLMs' Authentic Mathematical Ability from a Cognitive Perspective" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
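With the helpers above, the published averages and correlations of the complete records in this dump can be reproduced end to end; `parse` and `np` come from the earlier sanity-check sketch:

```python
records = {
    "x1Okv4kbVR": ("3;6;6;6", "5;4;4;4"),  # expected corr_rating_confidence: -1.0
    "x1SfON9HvT": ("3;3;3;5", "4;5;5;3"),  # expected: -0.870388
    "x1nlO1d1iG": ("3;5;5",   "3;4;3"),    # expected:  0.5
}
for pid, (r_str, c_str) in records.items():
    r, c = parse(r_str), parse(c_str)
    print(pid, round(float(r.mean()), 6), round(float(np.corrcoef(r, c)[0, 1]), 6))
```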
id: x1uv2gdjKV
title: Inference-Time Alignment of Diffusion Models with Direct Noise Optimization
track: main
status: Active
keywords: Diffusion Models;Inference-Time Alignment;Optimization;RLHF
primary_area: generative models
rating: 5;5;5;6
confidence: 3;3;2;4
soundness: 3;2;3;3
contribution: 2;2;2;3
presentation: 2;3;3;3
rating_avg: 5.25
confidence_avg: 3
soundness_avg: 2.75
contribution_avg: 2.25
presentation_avg: 2.75
corr_rating_confidence: 0.816497
Review:
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See the weakness section." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The idea is simple and neat. Directly optimizing the noise towards samples of maximum reward value is technically sound. The paper presents the idea in a clear way.\n\n- The paper considers both differentiable and non-differentiable reward settings, making direct noise optimization a quite practical method to use.\n\n- The paper provides some interesting theoretical insights to justify the drawback of direct noise optimization. Moreover, it provides a solution to avoid this drawback with a probability regularization." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a method called direct noise optimization (DNO) for maximizing some reward function for the generated samples. Different from finetuning and RL methods, DNO operates as a form of testing-time optimization. DNO is also extended to the non-differentiable reward setting by leveraging zero-order optimization techniques. Some theoretical analyses are provided to account for its empirical effectiveness." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- As the paper mentioned, directly optimizing the input noise can result in degenerated solutions, which the authors describe as OOD reward hacking. I think the problem, despite being alleviated by some additional regularization, will fundamentally limit the generation quality. This is conceptually similar to the problem of adversarial examples. Moreover, regularizing the noise to lie in a high-probability region may alleviate the reward hacking problem, but it also limits the ability to maximize the reward function. How can the effectiveness of this probility regularization be measured in practice? How to set the hyperparameter $\\gamma$ in practice?\n\n- Zero-order optimization is applied to the non-differentiable reward setting, but I am highly skeptical about its empirical effectiveness, as the zero-order optimization is known to be difficult to converge, not to mention that the noise optimization could be very challenging and non-convex/non-smooth.\n\n- The performance gain of DNO over the other baselines also seems be quite marginal from Table 1. For the other baseline methods, there are many hyperparameters that can be tuned. I think to better illustrate the performance gain, it will be good to compare the performance with different finetuning steps (or different inference-optimization steps)." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to Weakness part." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "+ This study considers aligning diffusion models in the inference time, alleviating the computational cost of aligning the model.\n+ The authors identify and address the problem with reward hacking in optimization.\n+ The proposed method is applicable for both differentiable and non-differentiable reward functions." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on the alignment of diffusion models by optimizing noises in the sampling process. The authors propose to optimize noises at each sampling step to improve the reward value, rather than optimize the model parameters. To this end, they discuss the effectiveness of the proposed DNO method, and address the reward hacking problem in optimization via a regularization term. Experiments show that DNO improves the reward value of generations." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- It seems that the idea of optimizing noises has been proposed by (Wallace et al., 2023b; Ben-Hamu et al., 2024; Novack et al., 2024; Karunratanakul et al., 2023). What is the essential difference between DNO and these prior works, except that DNO optimizes noises at each step?\n- The assumption of Theorem 1 is not convincing. I am not sure how to understand the smoothness from Figure 4 (Tang et al., 2024a). Although the noise-to-sample mapping is smooth, the reward function is usually complicated, especially for reward functions related to human preference. Is there any direct justification for the smoothness of $r\\odot M$?\n- The faster convergence speed of DNO is not well supported by their theoretical analysis. Equation (5) only demonstrates the final performance of DNO with SDE may be better than DNO with ODE, but does not justify the optimization speed.\n- In equation (10), there should be a $1/\\mu$ term.\n- There are several concerns about the experiment in Table 1. First, it is unclear what prompts and how many generations are used for evaluation. Second, for DNO, the annotated time is confusing. Does ``1 min’’ mean that generating one image using DNO costs 1 minute? Third, how about the performance of DNO with ZO-SGD, hybrid-1, and hybrid-2?\n- The evaluation for reward hacking based on $P(z)$ is limited. $P(z)$ only indicates the distribution of initial noise $z$ but cannot describe the distribution of the final generated image. 
I’d like to see more comparisons between images generated by different methods, either quantitatively or qualitatively.\n- Which experiment supports the claim in Line 452 ``while it prevents the test metrics, i.e., the HPS score and Pick Score, from decreasing throughout the process''?\n- Is the proposed DNO also effective for more complex prompts? Have the authors compared images generated by different methods on both simple and complex text prompts?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. In fact, the model is repeatedly applied to intermediate noises too. Why is there no regularization of out-of-distribution intermediate noise inputs to the model?\n2. Why is there no comparison to LGD and other baselines in Section 5.1? The performance of LGD is surprisingly bad in Section 5.2, even worse than SD 1.5. The authors argue that the reason is the complex reward used in 5.2, and I wonder whether LGD has better performance on similar reward functions like those in 5.1." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper provides a thorough treatment of DNO, including theoretical analysis, practical implementation details, and solutions to key challenges (OOD reward hacking, non-differentiable rewards). The probability regularization technique using concentration inequalities is particularly novel and well-motivated since it addresses important practical concerns without fine-tuning, works with arbitrary reward functions, can handle non-differentiable rewards, etc.\n2. The experimental evaluation is extensive and convincing, showing that DNO can match or exceed the performance of fine-tuning methods while requiring only inference-time optimization. The authors test on multiple reward functions and provide detailed ablation studies." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes Direct Noise Optimization (DNO), a novel approach for aligning diffusion models with continuous reward functions at inference time. The key idea is to optimize the injected noise during the sampling process to maximize desired reward functions, without needing to fine-tune the model parameters. The authors develop theoretical foundations for DNO, introduce probability regularization to prevent out-of-distribution samples, and extend the method to handle non-differentiable rewards. Extensive experiments demonstrate that DNO can achieve state-of-the-art reward scores within reasonable inference time budgets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Although the paper proposes a regularization technique to prevent overfitting, the resulting images still seem to overfit. 
Besides, the regularization only focuses on constraining the initial noise distribution to the high-probability area. Also see question 1.\n2. Lack of baseline results: There is no inference time cost comparison with LGD, even though it is an important inference-time baseline. Besides, a more relevant training method is DRaFT [1], which also uses direct backpropagation to optimize the reward function. It also faces a reward hacking issue similar to that of the proposed method.\n\n\n[1] Directly Fine-Tuning Diffusion Models on Differentiable Rewards\nKevin Clark, Paul Vicol, Kevin Swersky, David J Fleet" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How does DNO perform when used with other diffusion sampling methods besides DDIM? From my point of view, DDIM is the easiest and most direct way to model the mapping from noise to sample, i.e. $M_\\theta(z)$. If using schedulers other than DDIM, I wonder how DNO would work to simulate $M_\\theta$. Moreover, if the algorithm leverages a differentiable reward function for optimization, the gradients are passed backward through all diffusion steps. I assume that would bring significant computational costs comparable to direct reward propagation. Can the authors comment on this?\n\n2. I don't see how eq (2) brings a \"prompt-agnostic\" method. For example, when aligning stable diffusion, is the optimized initial noise a \"universal\" optimal value for all prompts?\n\n3. Currently, there are already many works aimed at aligning diffusion models. Roughly, they can be classified into two categories:\n (1) directly fine-tune the diffusion models. See: https://arxiv.org/abs/2407.13734, https://arxiv.org/abs/2410.08315. While DNO includes comparisons with alignprop and ddpo in Table 1, comparisons with many later RL-based finetuning methods that alleviate reward hacking issues are missing. I wonder how DNO would fare if compared to them. \n (2) purely inference time techniques. While DNO does not fine-tune diffusion models, DNO still needs to optimize through the diffusion pipeline to get optimized noise. The computational costs are still heavy. Here are some works that are purely inference-time and prompt-agnostic: https://arxiv.org/abs/2410.08134, https://arxiv.org/abs/2408.08252.\n\nDiscussing and comparing these works quantitatively (or qualitatively) is important for positioning DNO. However, it might also be true that these works are concurrent with this submission. Finally, at least qualitative discussions are needed.\n\n4. I assume the numbers in Table 1 do not consider reward hacking, which is problematic for a method that claims to alleviate reward hacking. This is because alignprop would significantly suffer from mode collapse when the aesthetic score exceeds 8.5, from my experience. But Table 1 reports 8.9 for alignprop. Can the authors comment on this?" 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. DNO's main feature is training-free. Current researchers have been more interested in inference-time techniques that do not require costly fine-tuning. Therefore, this work stands for an important and attentive direction of aligning generative models.\n2. The integration of probability regularization to mitigate out-of-distribution reward hacking is interesting.\n3. The authors applied DNO across various reward functions, highlighting its adaptability.\n4. DNO is compared rather comprehensively with some baselines on a variety of reward functions, which is good." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses the alignment of diffusion models to downstream tasks with continuous reward functions. It proposes Direct Noise Optimization (DNO) as a tuning-free, inference-time method for adjusting generated samples to maximize target rewards. Moreover, in the face of reward hacking (aka over-optimization), the paper introduces a probability regularization for alleviation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While theoretical parts of the probability regularization are comprehensive, I find those parts hard to parse. Could the authors highlight the key message of such regularization, and why would it work from a high-level perspective?\n2. Reward hacking is known to be a key concern in aligning LLM/diffusion models. The authors attempt to alleviate hacking through a regularize, which is good. However, the evaluation way remains problematic. In Fig 2, the authors presented that adding regularize sacrifices little performance for retraining fidelity. However the authors did not compare regularized DPO with other methods. In the end, the observation of Fig 2 is not a surprise to me because adding regularization almost surely leads to a tradeoff of performance and fidelity. There exist some works showing that simply adding small regularization to align-prop would greatly alleviate reward hacking. Therefore, Fig. 2 does not provide any further impressive information.\n3. One primary concern is that this work lacks many comparisons with existing fine-tuning-based and inference-based techniques that appear earlier or concurrent to this work. See more details below." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "This work studies the problem of inference-time alignment of diffusion generative models with downstream objectives" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024inferencetime,\ntitle={Inference-Time Alignment of Diffusion Models with Direct Noise Optimization},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x1uv2gdjKV},\nnote={under review}\n}" }, "abstract": { "value": "In this work, we focus on the alignment problem of diffusion models with a continuous reward function, which represents specific objectives for downstream tasks, such as increasing darkness or improving the aesthetics of images. The central goal of the alignment problem is to adjust the distribution learned by diffusion models such that the generated samples maximize the target reward function. 
We propose a novel alignment approach, named Direct Noise Optimization (DNO), that optimizes the injected noise during the sampling process of diffusion models. By design, DNO operates at inference-time, and thus is tuning-free and prompt-agnostic, with the alignment occurring in an online fashion during generation. We rigorously study the theoretical properties of DNO and also propose variants to deal with non-differentiable reward functions. Furthermore, we identify that a naive implementation of DNO occasionally suffers from the out-of-distribution reward hacking problem, where optimized samples have high rewards but are no longer in the support of the pretrained distribution. To remedy this issue, we leverage classical high-dimensional statistics theory to design an effective probability regularization technique. We conduct extensive experiments on several important reward functions and demonstrate that the proposed DNO approach can achieve state-of-the-art reward scores within a reasonable time budget for generation." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Diffusion Models", "Inference-Time Alignment", "Optimization", "RLHF" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/403ffe02031b1810b750bb8bf3267a63f7c512da.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/862145dca24fcbc59a503e3b4521fe03c3274513.zip" }, "title": { "value": "Inference-Time Alignment of Diffusion Models with Direct Noise Optimization" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
x1yOHtFfDh
SportU: A Comprehensive Sports Understanding Benchmark for Multimodal Large Language Models
main
Active
Multimodal Large Language Models;Sports Understanding;Benchmark
datasets and benchmarks
5;5;6;6
4;4;4;4
2;1;4;2
2;2;4;2
2;2;4;2
5.5
4
2.25
2.5
2.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please check weakness for details." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. The proposed dataset could be useful for the community.\n2. Both close and open-sourced models are evaluated.\n3. Metrics are studied with human verification." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces SPORTU, a new benchmark designed to evaluate the capabilities of Multimodal Large Language Models (MLLMs) in sports understanding and reasoning. SPORTU consists of two components: SPORTU-text, focusing on text-based reasoning, and SPORTU-video, focusing on video-based reasoning. The authors evaluate various LLMs on both components, revealing limitations in complex reasoning tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The reviewer is concerned about the lack of diversity and coverage of the dataset because of the limited prompt templates and number of samples. \n\n2. Implementation could be possibly flawed. \n- The error in Figure 6 looks suspicious and makes the reviewer wonder whether the model is called correctly or not. \n- The reasoning prompt asks the model to first generate answer and then reasoning, which is not optimal since the model's final answer cannot benefit from the reasoning process.\n- It is known that LLM usually prefers its own answer so it is important to understand G-eval' quality with different LLMs as the rater.\n\nMinor:\nL821 typos of \"Section ??\"" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "n/a" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- The video quality for the datasets\n- For each sports type, the video are biased to certain views or events?" 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- A multimodal new dataset for sports domain (with multiple sports) and well annotated by experts; the dataset should be helpful for the research communities\n\n- A well prompting capabilities to show the limitation of current LLM capabilities on the dataset. \n\n- Evaluating several reasonable public or private LLM models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper provides a multimodal dataset (text and slow-motion video) for evaluating (multimodal) LLM capabilities in the sports domain." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Yet another vertical dataset for LLM\n- It's helpful but marginal to expand the technical depth for the community\n- not clearly identified what current models failed." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. How do you split the SPORTU-text questions into rules-related, strategy-related, and scenario-related? What is your basis?\n\n2. What are the results on rule/strategy/scenario, respectively, on sportu-text?\n\n3. How is the error analysis in 5.1 conducted? Is there a definition for each error type?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. It proposes SPORTU-text and SPORTU-video to boost understanding more sports with rules understanding in text and video domains.\n2. It analyzes the views, reasoning prompts, sport types, the error types, which are comprehensive.\n3. The writing is clear." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In the AI+Sports area, existing works are limited to restricted kinds of sports, absence of explanations, or lack of reasoning on rules, and it proposes SPORTU consisting of SPORTU-text and SPORTU-video to boost understanding more sports with rules understanding. SPORTU-text evaluates models on rule comprehension and strategy understanding in the pure text domain and SPORTU-video evaluates models on understanding both video details and rules in the video domain.\nIt evaluates LLMs and MLLMs on SPORTU-text and SPORTU-video, revealing their limitations in complicated sports questions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. A benchmark aims to evaluate certain abilities and give some insights. The paper does not deeply discuss why the models have different performances and does not give advice on how to resolve the problem of understanding videos and reasoning on rules.\n\n2. 
The prompting strategies used for LLMs could also be tested on MLLMs when evaluating on the SPORTU-video benchmark, to see how the reasoning process influences MLLMs.\n\n3. It's not very clear if the questions in this dataset can comprehensively assess the models' abilities to understand sports.\n\n4. The Pearson correlation between humans and the other metrics is low. Many are near 0." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to the weakness part." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. As a sports domain understanding benchmark, the proposed SPORTU combines text-based and video-based tasks to assess models' sports reasoning and knowledge application abilities.\n\n2. The evaluation setting is comprehensive, including direct prompting and chain-of-thought (CoT) prompting. In addition, few-shot prompting is also applied in the SPORTU-text evaluation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents SPORTU, a comprehensive Sports Understanding Benchmark that integrates both text-based and video-based tasks to evaluate models' sports reasoning and knowledge application capabilities. Based on this benchmark, this paper tests the capability of existing open-source and closed-source models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Unclear motivation. The authors should clarify the differences between the proposed SPORTU and existing sports domain understanding benchmarks. Although discussions appear in the introduction and related work section together with Table 1, it is still unclear why the introduced features, for example, slow motion and multi-camera angles, are important. More discussions and visualizations are needed.\n\n2. Missing details in dataset construction. There exist some unclear details in the dataset construction. For example, how is the multi-camera setting guaranteed? Is it achieved simply by human annotator checks? In addition, the proposed SPORTU contains both multiple-choice and open-ended questions; how are these two categories divided?\n\n3. More advanced evaluation methods should be applied, for example, ST-LLM [1] and Qwen-VL [2].\n\n4. The paper writing should be polished. Some references are missing, for example \"Section ??\" in Line 821. 
The quotation mark error in '”Why is it a foul in the video?”' in Line 482.\n\n[1] ST-LLM: Large Language Models Are Effective Temporal Learners\n[2] Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024sportu,\ntitle={SportU: A Comprehensive Sports Understanding Benchmark for Multimodal Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x1yOHtFfDh},\nnote={under review}\n}" }, "abstract": { "value": "Multimodal Large Language Models (MLLMs) are advancing the ability to reason about complex sports scenarios by integrating textual and visual information. To comprehensively evaluate their capabilities, we introduce SPORTU, a benchmark designed to assess MLLMs across multi-level sports reasoning tasks. SPORTU comprises two key components: SPORTU-text, featuring 900 multiple-choice questions with human-annotated explanations for rule comprehension and strategy understanding. This component focuses on testing models' ability to reason about sports solely through question-answering (QA), without requiring visual inputs; SPORTU-video, consisting of 1,701 slow-motion video clips across 7 different sports and 12,048 QA pairs, designed to assess multi-level reasoning, from simple sports recognition to complex tasks like foul detection and rule application. We evaluate four prevalent LLMs, mainly utilizing few-shot learning paradigms supplemented by chain-of-thought (CoT) prompting, on the SPORTU-text part. GPT-4o achieves the highest accuracy of 71\\%, but still falls short of human-level performance, highlighting room for improvement in rule comprehension and reasoning. The evaluation for the SPORTU-video part includes 7 proprietary and 6 open-source MLLMs. Experiments show that models fall short on hard tasks that require deep reasoning and rule-based understanding. Claude-3.5-Sonnet performs the best with only 52.6\\% accuracy on the hard task, showing large room for improvement. We hope that SPORTU will serve as a critical step toward evaluating models' capabilities in sports understanding and reasoning. The dataset is available at \\url{https://anonymous.4open.science/r/ICLR_01-42D5/}" }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Multimodal Large Language Models", "Sports Understanding", "Benchmark" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/a59f8040eefb9787243f17a62b36fb2f0b8313e1.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "SportU: A Comprehensive Sports Understanding Benchmark for Multimodal Large Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
x33vSZUg0A
Which Tasks Should Be Compressed Together? A Causal Discovery Approach for Efficient Multi-Task Representation Compression
main
Active
Video Coding for Machine;Image Compression;Multi-task Learning;Causal Discovery
applications to computer vision, audio, language, and other modalities
3;5;8
4;4;3
2;3;4
3;3;4
1;1;3
5.333333
3.666667
3
3.333333
1.666667
-0.917663
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Would it be possible to incorporate more details of the graph learning into the main paper?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. The problem has multiple applications and is very sensible. Image compression has direct implications, and it is sensible to think image compression in terms of its final use (object detection, classification, etc) as opposed to only pixel reconstruction.\n\n2. The proposed approach has a strong mathematical foundation and makes intuitive sense (conditional entropy should help resolve redundancy between certain tasks). \n\n3. Quantitative results are thorough and show promise." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors focus on learning methods for image compression. To this end, they propose a method that optimizes image compression for a number of different downstream visual tasks. They consider the potential redundancy of encodings for similar groups of visual tasks, and they utilize directed acyclic graphics to learn causal relationships between tasks. This approach leads to a better multi-task representation. They evaluate their approach on a number of different visual tasks and demonstrate convincing quantitative results." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. I believe the description of the DAG learning should have more detail. Specifically, some of the details of the DAG based algorithm (described in the appendix as Algorithms 1 and 2) should be incorporated into the main paper. \n\n2. There are some typos and grammatical errors." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. Could you state the number of parameter sets used and explain how they are allocated across groups and individual tasks? The paper seems to suggest it's per group, but it also needs a set of task parameters per task for computing the pairwise gradient coherence.\n2. How is the forecast loss in Eq.3 used? \n3. 
Line 287, could you clarify if task order impacts the cost calculation and, if so, how do you address this potential variability in your method?\n4. Is there any constraint on the subsets used for set cover?\n5. Is it better or worse when the same task is not allowed to appear in different clusters? \n6. How is bitrate controlled in the proposed framework? How do you determine $\\lambda_i$ for each task?\n7. Is the causal discovery step required? Could the minimum description length principle be used for causal discovery, thus unifying steps 2 and 3 as finding a DAG structure that minimizes the bit-rate?\n8. Could you provide a complexity analysis showing how computational requirements change as the number of tasks increases?\n9. A simple baseline would be multi-task learning naively combined with end-to-end compression, i.e. without the grouping, just using the tasks as auxiliary tasks with well-tuned weights. Why would you not include such a baseline? How would this baseline compare theoretically to the proposed method?\n10. Figure 3, could you add a legend for what the color shades represent? Could you define the \"Anchor\" baseline in the figure caption or main text?\n11. How does the method perform compared to more familiar baselines such as VQGAN?\n12. The GitHub link is not provided.\n13. L137 typo \"Hu\"?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper discusses an important but less studied aspect of representation learning - how to leverage different supervised signals to learn a better representation, where \"better\" is defined as lower bitrate with higher downstream performance. This principle is sound. The experimental results demonstrate that the proposed framework achieves better performance compared to end-to-end compression methods using the same bitrate." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a framework, Taskonomy-Aware Multiple Task Compression (TAMC), for lossy compression where the distortion is defined by multiple downstream tasks. First, TAMC groups tasks into clusters where tasks within a cluster are mutually supportive, and a shared representation is learned for each cluster. Then, it leverages causal discovery to identify dependencies between groups. This results in a directed acyclic graph (DAG), which can be used for further compression of the representation. Experiments on the Taskonomy dataset demonstrate that TAMC achieves superior bitrate reduction and task performance compared to baseline compression methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Clarity**: the description of the framework lacks clarity. Given that the paper proposes a fairly complex system, clarity is even more important for readers to follow. For example, although there is a space constraint, the description of step 3 (taskonomy-based compression) is too brief. Other more specific questions are raised below.\n\n**Lack of ablations**: there are at least 2 ablations the authors could provide. 1. an end-to-end compression with multiple supervised tasks as auxiliary tasks; 2. single task groups, where instead of grouping tasks together in phase 1, simply treat each task as a group and carry out the rest of the learning."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- Although the paper states that tasks were grouped based on gradient coherence, the justification for task grouping is not clearly stated in LN 470. Could the task grouping process be further explained?\n- In Appendix A.5.2, the authors state that they select the parent node for a chosen child based on the node with the highest mutual information and claim that this is equivalent to the optimal choice for conditional entropy. While this claim seems correct, in the partial ordering of Algorithm 5, the child node is not explicitly defined, so the claim does not appear to be directly related to the implementation of the proposed method. It seems there might be an implicit assumption that the task with the larger entropy in its representation distribution is assigned the parent node. Would this be an accurate interpretation?\n- It is mentioned that the total bit-rate of task-wise representations was used as a regularization for model training, but there is not enough explanation regarding the bit-rate for each task in the trained models. It would be helpful to include information on the observed bit-rates for each task and to explain how these bit-rates may be related to the tasks.\n- Although theoretical approaches are mentioned in Appendix A.5.4 and A.5.5, they do not seem to be fully utilized or proven within the logic of the paper. Is there a plan to elaborate on these? In addition, could further explanations be provided in relation to the above Weaknesses and Questions?\n- (minor) Uniform upper and lower case conventions in subsection titles (especially in Section 5)\n- (minor) Typo in Equation 3 (\\theta_s -> \\theta_s^t)\n- (minor) Typos in Algorithm 1, 2 in Appendix (end for\"\\n\"return)" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Hierarchical structure of multiple tasks: The authors propose a methodology that learns a universal representation for computer vision tasks by utilizing techniques from multi-task learning and uses these representations to form a hierarchical structure of the tasks. I believe this methodology could aid research on general representations in areas beyond data compression.\n- Robustness of the trained representation: At lower bit-rates, the performance on downstream tasks consistently outperforms trained compression methods (such as MLIC++, ELIC).\n- Ablation with random graphs: The authors compared the performance of the proposed causal DAG structure, which exploits causal relationships, with that of a random graph structure to verify its effectiveness. 
This demonstrates the existence of inter-task relationships and shows that efficient learning and inference, which take these relationships into account, have a considerable impact on performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors propose a data compression technique using representations that can be applied to multi-tasking in computer vision. They propose a methodology for task-aware representation learning by using multi-task learning approaches and grouping similar tasks based on the alignment of gradient vectors during the training phase. To efficiently utilize the learned representations, they calculate the causal relationships between task groups using conditional entropy and construct a directed acyclic graph (DAG). The proposed method was validated using the Taskonomy dataset, and the baselines included traditional data compression methods (e.g., JPEG, WEBP) as well as some recent methods (e.g., ELIC, MLIC++). The authors confirmed the robustness of the proposed method at relatively lower bit-rates compared to the baselines across multiple tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Lack of clarity, not well-organized components: The figures in the paper are generally not well-organized, and the font size within the figures is excessively small, causing visibility issues. In particular, in Figure 2, the font size within the figure is less than half the line height. Even if it results in a reduction of the amount of information in each figure, it seems necessary to increase visibility by focusing on the most important information.\n- Relatively low data decoding accuracy: As mentioned by the authors, the results in subsection 4.3.2 show that the proposed method performs similarly to or worse than existing methods in terms of image compression. In order to show that this difference is not a significant problem for the application of the method, it seems necessary to qualitatively compare the decoded images, at least with some examples.\n- No results on time and space complexity: One of the critical aspects of a data compression algorithm is minimizing the time and space cost of its execution. Since this work focuses on data compression techniques using multi-task learning methods rather than representation learning methods, I believe it is necessary to analyze such costs to justify the applicability of the proposed method.\n- Lack of baselines for comparison: While it seems meaningful that the paper uses traditional data compression techniques (e.g., JPEG, WEBP) as baselines, there is still a lack of comparison with more recent trainable methods. Even if the compared baselines are considered state-of-the-art models, I believe that additional comparisons are necessary to clarify the consistent robustness of the proposed method compared to other approaches.\n- To summarize, the paper lacks completeness in explaining the methodology and convincing the readers through the results, and the experiments are not sufficiently thorough. Regarding the experiments, it may be helpful for the authors to refer to the analytical techniques used in the MLIC++ paper [1], which they have cited as a key reference. Through a comprehensive revision of the content and additional analyses, the paper’s clarity can be improved, and its novelty can be further emphasized.\n\n[1] Wei Jiang et al. 
MLIC++: Linear Complexity Multi-Reference Entropy Modeling for Learned Image Compression. https://arxiv.org/abs/2307.15421" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024which,\ntitle={Which Tasks Should Be Compressed Together? A Causal Discovery Approach for Efficient Multi-Task Representation Compression},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x33vSZUg0A},\nnote={under review}\n}" }, "abstract": { "value": "Traditional image compression methods often overlook the intricate interdependencies in multi-task learning, resulting in inefficiency and redundancy. In this paper, we propose a novel compression framework that leverages causal graph models to uncover conditional relationships between mutually beneficial task clusters. By constructing directed acyclic graphs (DAGs) based on conditional entropy, we capture the causal links among tasks, enabling progressive, context-aware compression. Parent representations act as hyperpriors for their dependents, reducing redundancy, enhancing scalability, and boosting compression efficiency. Extensive experiments across key computer vision tasks, including segmentation, depth zbuffer, and autoencoding, demonstrate superior bitrate reduction and task performance. Our findings underscore the importance of disentangling task representations and modelling causal relationships for efficient multi-task compression, offering a new perspective on compact representation learning for advanced intelligent systems. Code will be available at: https://github.com." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Video Coding for Machine", "Image Compression", "Multi-task Learning", "Causal Discovery" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/23222ae8a10530aef73cfb4e6a91d4f9681f741a.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
}, "summary": null, "supplementary_material": null, "title": { "value": "Which Tasks Should Be Compressed Together? A Causal Discovery Approach for Efficient Multi-Task Representation Compression" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
x3F8oPxKV2
Zero-Shot Learning of Causal Models
main
Active
Causality;Transformers;Generative Models
causal reasoning
5;5;5;6
3;3;3;3
3;3;2;3
3;3;2;3
3;4;3;3
5.25
3
2.75
2.75
3.25
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Could the authors explain why Cond-FiP performs similar to some baselines in noise prediction and sample generation, especially when the node scale is the same or smaller than used in training? How is the FiP model implemented? In the original paper, it seems that the task is the recovery of the topological ordering. Is the FiP baseline here aware of the causal structure of datasets?\n- How does the alternating application of transformer blocks E work? Is this just an alternating optimization method where you optimize for samples when nodes are fixed and optimize for nodes when samples are fixed?\n- For zero-shot inference of the SCM distribution, what is the level of distribution shift that a new dataset can have for this method to be able to extrapolate well?\n- The main capabilities of the proposed framework are noise prediction and observational/interventional sample generation. However, individual counterfactual sample generation is also important in many applications. Can this framework enable counterfactual inference?\n- In the Adaptive Transformer Block, which iteratively updates the noise conditioned on the dataset embedding $z_{emb}$, can we interpret this as sort of a noise abduction typically performed in counterfactual inference?\n- How exactly does one perform interventions in the Cond-FiP? It would be beneficial if the authors elaborate on this mechanism in the paper. From my understanding, we just feed in an intervened SCM causal graph with the mutilations and use the corresponding dataset embedding for conditional generation. However, this is not made very clear in the paper." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- This paper is one of the first to consider generalizing the learning of functional mechanisms of structural causal models from arbitrary datasets and causal graphs and is a significant step toward building causally-aware foundation models.\n- The paper is written well with clear intuitions and explanations as to how it relates to similar work (e.g., FiP).\n- Although the assumptions are a bit strong (additive noise model, causal graphs known, noise variables known), the general idea of using an amortized procedure to approximate SCM distributions in a zero-shot manner is quite interesting.\n- The empirical results are convincing and show interesting observations, especially the performance of sample generation under distribution shifts and as the causal graphs scale up. It is certainly impressive that Cond-FiP can approximate the SCM distribution of 50 and 100 graph nodes quite well given that it was trained on only datasets with 20-node graph size." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "Learning the causal generative process from observational data is a challenging problem bottlenecked by the necessity of learning a separate causal model for each dataset. This paper studies a unifying framework to enable zero-shot inference of causal generative processes of arbitrary datasets by training a single model. The authors adapt a recent advancement in causal generative modeling (FiP) to infer generative SCMs conditional on empirical dataset representations in a supervised setup, where the SCM is reformulated as a fixed-point problem. They propose an amortized procedure that takes in a dataset and its causal graph and learns a dataset representation. Then, the authors train a model conditioned on dataset embeddings to learn the functional mechanisms of the generative SCM. This framework enables both observational and interventional sample generation in a zero-shot manner. Empirical results show that the method performs competitively with baseline models for in-distribution and out-of-distribution settings." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- In a synthetic data scenario, assuming access to the noise samples is a feasible assumption, but for real-world datasets, this will not typically hold. Using the noise samples as the supervision for the dataset embedding model may easily become unrealistic. The authors have an evaluation on a real-world benchmark (Sachs) in the appendix where they fit a Gaussian model. However, interventional sample results are not provided.\n- I believe the idea to directly infer the SCM distribution under the additive noise model assumption is interesting. However, again, the feasibility of this assumption may not always hold. It is true that we often parameterize causal models as linear/nonlinear additive noise models, but this can be violated in practice. It seems that this approach would only hold under the strict assumption of additive noise models.\n- Knowledge of the causal graph for several datasets can potentially be a strong assumption. In real-world datasets, the causal graph may be unknown and must be discovered. However, for the sake of this work, the synthetic scenarios are good for proof of concept." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. I have a hard time understanding the Generation with Cond-FiP paragraph. The \\mathcal{T} symbol seems to have a different meaning in the Projection paragraph than in the Generation with Cond-FiP. Is it a notational error? Please clarify. Also, what is the relation between eq. 5 and the equation in the Adaptive Transformer Block paragraph? They seem oddly similar yet, if I understand correctly, the \\mathcal{T} denotes the whole generation process described in the Adaptive Transformer Block paragraph.\n2. 
The Interventional Predictions explanation is unclear to me. Could you elaborate more on how you “modify in place the SCM obtained by Cond-FiP”?\n3. Could you provide more details on the generation of the datasets? What is the density of the graphs used in training and evaluation? What are the differences in the parametrization of functional relationships between P_in and P_out?\n4. What kind of regressor was used as a causal mechanism in DoWhy? What architecture was used in DECI?\n5. What is the role of DAG attention in dataset embedding? Can you elaborate on what would happen if you were to use standard dot product attention in the dataset encoding phase?\n6. It seems that the performance of the baselines on the RFF OUT dataset degrades significantly compared to RFF IN. Could you explain why?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The problem is well motivated by recent advances in the area, novel, and of interest to the community.\n2. The method’s description is detailed and well-structured." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a novel method and task of inferring functional relationships of SCMs from observations and their associated causal structures in a zero-shot manner. The problem is well motivated by the literature. The main technical contributions of the work are the definition of Cond-FiP (a modification of the FiP architecture), and the construction of the dataset embeddings (used to condition FiP). The latter is done by training the transformer-based encoder to predict the evaluations of the functional relationships over the distribution of SCMs. The former is done using the FiP architecture, which takes as input sample embeddings conditioned on the SCM and is trained with the MSE objective on samples from multiple SCMs.\n\nThe method is evaluated in the synthetic setup on varying graph structures, functional relationships, and noises. It performs comparably to the selected set of baselines on noise prediction, sample generation, and interventional sample generation tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Experimental evaluation seems limited. The authors claim that their approach matches the performance of SoTA, but they only compare against a small set of methods. It seems natural that the authors would compare against cited work (Javaloy et al., 2023; Khemakhem et al., 2021) and architectures used in the causal discovery literature [1, 2].\n2. In various fragments of the text the authors claim that “this is the first time that SCMs are inferred in a zero-shot manner from observations”. While it is clear to me that this work combined with previous literature technically allows SCM inference (graph and functional relationships), this ability is not demonstrated nor evaluated in this work. \n3. The method does not generalize well to larger datasets (as stated in Appendix C). This is an important limitation and should be stated explicitly in the main text.\n4. The evaluation would be hard to reproduce as the details of train and test dataset generation are not provided.\n5. Some parts of the method’s explanation are unclear to me. 
(see Questions)\n\n[1] Annadani et al., BayesDAG: Gradient-Based Posterior Inference for Causal Discovery, NeurIPS 2023\n[2] Brouillard et al., Differentiable Causal Discovery from Interventional Data, NeurIPS 2020" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. The theoretical foundation of \"ZERO-SHOT LEARNING OF CAUSAL MODELS\" indeed needs to be strengthened, given its complexity and the challenges it presents in the field of machine learning, because the cross-domain SCM differences seem to be significant. I am still puzzled as to why zero-shot learning theoretically works. Are there problems such as the incapability of observation-based learning with conditional FiP?\n\n2. In the experimental section, I think more comparisons are needed. For example, on the DoWhy dataset, you may vary the node number and add more generative mechanisms. Also, ablation studies of the important modules of FiP (e.g., the encoder) should be presented, since this seems to be an important factor for the final results.\n\n3. I still feel that the interventional and sample generation evaluations could be more comprehensive. Is there any demo example to show how your method differs from others? I guess that which node you manipulate still has a huge impact on the result. Am I right or not?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The idea of zero-shot learning with conditional FiP is interesting. \n\n2. The problem studied is important, in the sense that zero-shot learners are commonly needed for such tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes to amortize the learning of a conditional version of FiP to directly infer the generative SCMs from observations and causal structures in a zero-shot manner." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The theoretical justifications need to be improved.\n2. Some experiments are missing." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No ethics concerns." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the weaknesses above." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Zero-shot learning of a causal model is an important task, potentially due to the limitation of the training dataset when training the model. So I believe the task proposed by the paper is highly motivated. Additionally, the paper is written in a structured way." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a method to learn a single model capable of inferring the causal generative processes of datasets in a zero-shot manner, rather than learning a specific model for each dataset. The general idea is to amortize the learning of a conditional version of the FiP architecture, which can infer the generative SCMs directly from observations and causal structures on synthetically generated datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. I would suggest author to either use a figure or introduce a bit more background of the idea of amortized causal learning in related works.\n\n2. In background on causal learning, I still don't know how to compute H. and what $Jac_x$ is. Also, $d_n$ and M are not defined.\n\n3. In training setting, I don't understand how to use noise samples. In equation (1), they seem to be residual terms but in Section 3.1, they are said to play the rule of the target variable.\n\n4. Do we need specific requirements on training data? Are they supposed to cover as many as domains?\n\n5. Why modeling noise will help zero-shot ability of SCM? I wish the paper gives more explicit explanations.\n\n6. In Table 2 and Table 3, it seems that sometimes Cond-FiP has the worse performance than FiP. This needs to be explained." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose an approach to amortize the learning of causal models, thus enabling zero shot inference." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024zeroshot,\ntitle={Zero-Shot Learning of Causal Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x3F8oPxKV2},\nnote={under review}\n}" }, "abstract": { "value": "With the increasing acquisition of datasets over time, we now have access to precise and varied descriptions of the world, capturing all sorts of phenomena.\nThese datasets can be seen as empirical observations of unknown causal generative processes, or Structural Causal Models (SCMs).\nRecovering these causal generative processes from observations poses formidable challenges, and often require to learn a specific generative model for each dataset.\nIn this work, we propose to learn a \\emph{single} model capable of inferring in a zero-shot manner the causal generative processes of datasets. 
\nRather than learning a specific SCM for each dataset, we enable FiP, the architecture proposed in~\cite{scetbon2024fip}, to infer the generative SCMs conditionally on their empirical representations.\nMore specifically, we propose to amortize the learning of a conditional version of FiP to infer directly the generative SCMs from observations and causal structures on synthetically generated datasets.\nWe show that our model is capable of predicting the true generative SCMs in a zero-shot manner, and as a by-product, of (i) generating new dataset samples, and (ii) inferring intervened ones.\nOur experiments demonstrate that our amortized procedure achieves performance on par with SoTA methods trained specifically for each dataset on both in- and out-of-distribution problems. \nTo the best of our knowledge, this is the first time that SCMs are inferred in a zero-shot manner from observations, paving the way for a paradigmatic shift towards the assimilation of causal knowledge across datasets." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Causality", "Transformers", "Generative Models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/52bd709d10a2fe6cbcb0f53b1d4609f5e2fcd544.pdf" }, "presentation": null, "primary_area": { "value": "causal reasoning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Zero-Shot Learning of Causal Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
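Editor's aside: several of the reviews above hinge on the additive noise model (ANM) assumption and on what zero-shot observational/interventional sample generation from an SCM means. The following minimal Python sketch illustrates the generic ANM formulation x_i = f_i(pa_i) + n_i over a known DAG; it is not the paper's Cond-FiP code, and all names (simulate, dag, f) are hypothetical.

```python
# Minimal sketch (assumed names, not the paper's code) of an additive-noise SCM over a
# known DAG: observational sampling, plus the "modify in place" hard intervention do(b = c)
# that one of the reviews asks about.
import numpy as np

def simulate(dag, f, n, rng, do=None):
    # dag: dict node -> list of parents, keys assumed to be in topological order
    # f:   dict node -> mechanism mapping an (n, |parents|) array to an (n,) array
    # do:  optional dict node -> constant, implementing a hard intervention
    x = {}
    for node, parents in dag.items():
        if do and node in do:
            x[node] = np.full(n, float(do[node]))   # clamp the intervened node
            continue
        noise = rng.standard_normal(n)              # exogenous noise (the "noise samples")
        pa = np.stack([x[p] for p in parents], axis=1) if parents else np.zeros((n, 0))
        x[node] = f[node](pa) + noise               # additive noise model: x_i = f_i(pa_i) + n_i
    return x

rng = np.random.default_rng(0)
dag = {"a": [], "b": ["a"], "c": ["a", "b"]}
f = {
    "a": lambda pa: 0.0,
    "b": lambda pa: np.tanh(pa[:, 0]),
    "c": lambda pa: 0.1 * pa.sum(axis=1) ** 2,
}
obs = simulate(dag, f, 1000, rng)                   # observational samples
intv = simulate(dag, f, 1000, rng, do={"b": 2.0})   # samples under do(b = 2)
```

Note how the supervision concern raised above is visible here: the noise array is trivially available in simulation but is never observed for real data.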
x3cFAoorct
Learning Arbitrary Logical Formula as a Sparse Neural Network Module
main
Active
Neuro-Symbolic AI; System 2 intelligence; Deep Symbolic Learning (DSL); Equation Learner (EQL); differentiable Neural Logic Networks (dNL)
neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
3;3;3;5;8
5;3;3;2;3
1;2;1;2;3
2;2;3;3;3
1;2;1;1;4
4.4
3.2
1.8
2.6
1.8
-0.354167
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- Please summarily explain why concrete distributions help compared to $\\epsilon$-greedy?\n- How does your new trick compare to [1]?\n\n[1] Simple and Effective Transfer Learning for Neuro-Symbolic Integration. Daniele et al. 2024" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- The general idea of having membership functions for symbols is interesting, and is potentially an elegant abstraction of ideas from DSL. \n- The idea of learning rules from perception in general is very interesting, and is of significant importance in NeSy." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Summary:\nThe authors propose a Neuro-Symbolic(NeSy) method that can learn rules from perception in an end-to-end fashion. The paper aims to improve upon recent lines of work in this direction, and proposes new designs and symbol selection strategies, with different levels of noise and distributions. The paper also proposes a gradient shortcut strategy to improve the training and convergence. The paper also show experiments on synthetic MNIST based NeSy benchmarks --- widely used to test NeSy frameworks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Clarity: I think the paper is very unclear. This can be significantly improved by adding a running example, and potentially moving the comparison (3.1) and parts of section 4 to the Appendix. However, the general writing of related works and introduction is not quite clear. I would suggest the authors to arrive at the main message of the paper at earlier stage in the introduction.\n\n- Section 2.1 is well written, and well-motivated. However, 2.3 and 2.3 (in my understanding the main contributions of the paper) are not clearly presented. It is not clear why Concrete distributions help?\n\n- Figure 3's explanation is quite unclear and imprecise.\n\n- I am not sure, how novel the idea of gradient shortcuts is --- see questions." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. How complex can this approach effectively scale to? \n2. 
In the experimental section, the random seed is fixed at 42, which is unexplained." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Originality: 2.5/5\n\nCompared to the existing work DSL, this approach offers the ability to explicitly learn the logic equation. It introduces a Concrete distribution alongside the Gödel t-(co)norm proposed in DSL, which adds noise to the soft symbolic values.\n\nQuality: 2/5\n\nPros: The context and mathematical formulations are well presented.\n\nCons: Scalability is a significant concern, as the number of neurons correlates with the number of internal states. While DSL has demonstrated scalability up to a 1000-digit sum, this work does not address its own scalability. How complex a task can this approach effectively scale to? In the experimental section, the random seed is fixed at 42, which is unexplained.\n\nClarity: 2/5\n\nPros: The introduction provides a clear and convincing narrative.\n\nCons: The figures do not effectively support comprehension. For example, while Figure 1 illustrates the different outcomes of EQL and AFL, fully understanding it requires a detailed study of both paradigms. Figure 3 is even more challenging, as the components are not well explained in the caption, and there are many variants with minor changes. Figure 4 lacks sufficient details for understanding the experiments and data fully. \n\nSignificance: 2/5\n\nScalability is the primary limitation impacting the influence of this work on the neurosymbolic community." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work proposes a logical formula learner framework, which comprises three components: LFL-Type1 learns arbitrary logical formulas, LFL-Type2 learns a look-up table, and LFL-Type3 has combinatorial search freedom between them. This framework is end-to-end differentiable, can converge in a single run, and can learn arbitrary logical formulas with the symbolic module." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "See strengths above." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "It is not completely clear to me to what degree other approaches rely on the same amount of practical/engineering tricks to achieve proper convergence. I'm under the impression that e.g. ILP does not need this as heavily, but I also have to admit to not being an expert on logic formula learning." 
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Very well written, clearly presented.\n\nThe problem is interesting, as learning logical formulae opens many doors for NeSy learning.\n\nThe approach seems to be mostly original: The foundations (DSL and dNL) were clearly acknowledged and delineated from original contributions by the authors." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces the LFL (Logical Formula Learner), an architecture that entails and expands prior work, in particular DSL (Deep Symbolic Learning) and dNL (differentiable Logic Network) . Both prior works feature learnable logical representations. LFL proposes an architecture that includes both DSL and dNL, but within LFL the user can choose more or less representational freedom. This architecture also includes MLPs as loss shortcuts and several loss terms to control convergence.\n\nEvaluation is done on multi-digit MNist addition, MNist-addition and on 3-layer logical formula." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Figure 4(d) seems a bit dishonest. Apparently DSL's poor performance is down to solely training mode settings. This should be presented more clearly and transparently, e.g. by showing test performance.\n\nThe paper could use a comparison to some of the competing branches of related work, e.g. differentiable/neural ILP.\n\nVarious typos:\nTypo achieved / archived bottom of page 1.\nFig 3's caption: references to \"such as [9] and [16]\" should probably read \"such as Equations [9] and [16]\".\n\"In these networks we also constrain**t**\" -> \"In these networks we also constrain\"." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- The paper is missing several existing works on learning end-to-end learning: [1-5], of which [1-4] also learn proper logic rules. Furthermore, [4-6] are papers that also use an MLP loss / 'gradient shortcut' to (pre-)train the classifiers, which is critical to getting these systems to work. This is therefore not a new trick. \n- I understand the relation to EQL, and it's good to cite it, but outside of inspiration, these two methods are not that directly related? It seems like LFL is more like differentiable inductive logic programming in the sense of [7] than symbolic regression. \n- The footnote 2 is not correct: Scaling these weights does more than scaling the learning rate as it changes how close to 0.5 the average values of g(w_i) are. This means the derivatives of other parts of your system will change.\n- The 'LFL' framework is not clearly and formally defined. 
What are the constraints on the different choices of the functions?\n- The motivation for the linear reconstruction model is not clear to me. Is this purely for debugging? \n- The data in MNIST Addition is not binary, so why is a binary cross-entropy loss used? \n- What is the motivation for the label loss? \n- Figure 3 can be trimmed significantly: a single one of these 4 images should suffice, by just denoting what is optional. \n- The architecture for multi-digit MNIST Addition in DSL (which I think is copied here) is highly specific to this task — how would this extend to different tasks?\n- An experiment on data other than MNIST would be nice — MNIST is easily clusterable, making the MLP shortcut a useful signal, but this isn't the case for all data. \n- The dataset setup for MNIST Addition is not correct and uses infinite data, while the canonical setup in the community is to take the training images and randomly create a partition of them for different sums, giving finite data. The dataset is then about 60,000/2N in size — making the problem harder to train for larger multi-digit problems. This should be fixed.\n- The relevance of experiment 3.1.1 could be clearer.\n- Experiment 3.2: I don't understand this result. There is nothing wrong with different training vs. inference behaviour, and this is standard in many DL layers. The label on Figure 4.d is unclear: this is _test_ accuracy under _training_ mode, right? \n\nMinor\n- The writing contains some typos. E.g.: 052: Archived by - achieved by, 519: Conclution -> conclusion\n- Be careful with quotes ''; they should be different when opening.\n\n\n[1] Li, Zenan, et al. \"Neuro-symbolic learning yielding logical constraints.\" Advances in Neural Information Processing Systems 36 (2024).\n\n[2] Wang, Po-Wei, et al. \"SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver.\" International Conference on Machine Learning. PMLR, 2019.\n\n[3] Cunnington, Daniel, et al. \"The role of foundation models in neuro-symbolic learning and reasoning.\" International Conference on Neural-Symbolic Learning and Reasoning. Cham: Springer Nature Switzerland, 2024.\n\n[4] Aspis, Yaniv, et al. \"Embed2Rule: Scalable Neuro-Symbolic Learning via Latent Space Weak-Labelling.\" International Conference on Neural-Symbolic Learning and Reasoning. Cham: Springer Nature Switzerland, 2024.\n\n[5] Daniele, Alessandro, et al. \"Simple and Effective Transfer Learning for Neuro-Symbolic Integration.\" International Conference on Neural-Symbolic Learning and Reasoning. Cham: Springer Nature Switzerland, 2024.\n\n[6] Aspis, Yaniv, et al. \"Embed2Sym: Scalable neuro-symbolic reasoning via clustered embeddings.\" Proceedings of the 19th International Conference on Principles of Knowledge Representation and Reasoning (KR 2022), pp. 421-431. IJCAI Organization, 2022.\n\n[7] Evans, Richard, and Edward Grefenstette. \"Learning explanatory rules from noisy data.\" Journal of Artificial Intelligence Research 61 (2018): 1-64." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The task of learning both rules and classifiers is quite challenging, and the addition of noise via the Concrete distribution could be a good solution to this." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors study three variants of differentiable inductive logic programming to both learn end-to-end neural classifiers and rules in a NeSy predictor setup. The architecture is based on a form of fuzzy operators with added stochasticity." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Currently, the paper is hard to follow, with a lot of abbrevations (it is difficult to remember what LFL-Type1-3 are!). The problem setup is not well-defined. I could infer it from my background, but this should be clear. I found it difficult to follow exactly how the 5 models (dNL, DSL and LFL-Type-i) fit in. The different LFL types are poorly motivated, and just are defined separately. Finally, the paper misses some critical related work, limiting the novelty of the paper. \n\nThe paper is limited in evaluation. It is only evaluated on a single task (MNIST Addition) and has a major problem in dataset setup which I will discuss below. Some different datasets, ideally also with different visual datasets than MNIST, would help with convincing that this model can properly learn." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. There is no formal definition of ϵ-greedy policy in line 171, page 4. \n2. There is no formal definition or citation for Gödel t-(co)norm in line 215 page 4. \n3. The input of the softmax function is a vector. However, the input of the softmax function defined in Equation (9) line 173 page 4 is a real number, which is hard to be understood by readers." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The authors conduct experiments with multiple settings to learn rules from a CNN model. The authors open their source code for the reference." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposed a general framework of network modules that explicitly equate a logical formula after the convergence of another neural network. The model is end-to-end differentiable, and training all modules from scratch achieving joint convergence in a single run, and explicitly learning arbitrary logical formulas (within limited complexity) with its symbolic module.\n\nHowever, I think this work is borderline rejected because of the following reasons:\n1. 
Some notations are not clearly defined and are used incorrectly. \n2. The structure of the manuscript is too messy to determine which parts are novel and which are preliminaries." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The structure of the paper does not make clear which parts are novel contributions and which are preliminary work. For example, Sections 2.2.1 and 2.2.2 describe the Differentiable Neural Logic Network (dNL) and Deep Symbolic Learning (DSL); this material should be presented together in the Preliminaries, not the contribution section. Furthermore, the terminology in Sections 2.2.1 and 2.2.2 should have citations. \n2. Some formatting does not follow the standard, such as 'define in 6' in line 156 and '16 is also' in line 241. \n3. Some definitions are not very clear, such as the 'binary data' in line 103. \n4. The authors only compare performance with Deep Symbolic Learning (DSL), proposed by Daniele et al. (2022). I think there should be more benchmarks to demonstrate the performance of the proposed methods experimentally." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose the Logical Formula Learner framework, a general framework of network modules that explicitly equate a logical formula after convergence." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024learning,\ntitle={Learning Arbitrary Logical Formula as a Sparse Neural Network Module},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x3cFAoorct},\nnote={under review}\n}" }, "abstract": { "value": "NeSy (Neuro-Symbolic) predictors are hybrid models composed of symbolic predictive models chained after neural networks. Most existing NeSy predictors require either given symbolic knowledge or iterative training. DSL (Deep Symbolic Learning) is the first NeSy predictor that supports fully end-to-end training from scratch, but it learns a look-up table rather than arbitrary programs or formulas. We propose the Logical Formula Learner framework, a general framework of network modules that explicitly equate a logical formula after convergence. We then propose 3 novel designs within the LFL framework with different levels of combinatorial search freedom: LFL-Type1 learns arbitrary logical formulas, LFL-Type2 learns a look-up table, and LFL-Type3 has combinatorial search freedom between them. LFL-Type1 and LFL-Type2 show improvements over previous designs, and all three types can be wrapped into NeSy predictors. To our knowledge, the LFL-Type1-based NeSy predictor is the first NeSy predictor that supports fully end-to-end training from scratch and explicitly learns arbitrary logical formulas." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Neuro-Symbolic AI; System 2 intelligence; Deep Symbolic Learning (DSL); Equation Learner (EQL); differentiable Neural Logic Networks (dNL)" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/0b59910fdeafd65685fbd5f16da7e0539851908f.pdf" }, "presentation": null, "primary_area": { "value": "neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/28416bbec966182b62f055df578b49d68848cb63.zip" }, "title": { "value": "Learning Arbitrary Logical Formula as a Sparse Neural Network Module" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
x3jRzVAltZ
VR-Sampling: Accelerating Flow Generative Model Training with Variance Reduction Sampling
main
Active
Flow Generative Models;Training Acceleration;Diffusion Models
generative models
5;5;6;6
3;4;3;3
3;2;3;3
2;2;3;3
3;2;3;2
5.5
3.25
2.75
2.5
2.5
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "What is the location parameter m in the caption of table 3?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The authors propose an upper bound for the variance of loss gradient estimates during training of flow-based generative models.\n2. With the VR sampling strategy, the number of training iterations required to achieve similar performance is significantly reduced compared to the baseline." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors theoretically identify the high variance in loss gradient estimates at intermediate training timesteps in flow-based generative models, and this variance affects influences the convergence of the optimization process during training. To address this issue, they construct an upper bound for the average total gradient variance using a function related to the signal-to-noise ratio (SNR) and propose a Variance-Reduction Sampling (VR-sampling) strategy. This strategy prioritizes sampling timesteps from high-variance regions more frequently to improve training efficiency." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. There is no comparison with the state-of-the-art methods, and the results in Table 2 are significantly worse than the current state-of-the-art performance.\n2. The paper lacks a detailed introduction to the baseline used for comparison in the results tables.\n3. The paper does not provide an analysis of how the VR reduction strategy enhances the qualitative results.\n4. In Section 3.3, the sampling process in the proposed strategy is not clearly explained. The current description mentions calculating normalization, followed by the PDF and the inverse CDF, but it is unclear how the probability density function π(t) is derived. Is π(t) the result of the inverse CDF? Additionally, there is no information on the final distribution from which the samples are generated." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Based the discussion and method in the paper, during training, we will sample more high-variance regions for training. 
Will this cause overfitting on high-variance regions?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper provides a theoretical analysis of the training of flow models.\n\n2. Based on their findings and theoretical analysis, they propose a simple but quite effective method to accelerate training.\n\n3. I think this method can also be applied to other models easily, and the conclusion will still be valid." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "1. This paper first analyzes the reasons for training issues in flow models and finds the important role of high-variance regions.\n\n2. Based on the theoretical analysis, this paper proposes a variance-reduction sampling strategy that samples more timesteps with high variance to accelerate model convergence.\n\n3. The experimental results show that the acceleration is significant." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The complexity and overhead of Monte Carlo simulations. This work heavily depends on Monte Carlo simulation, which is complex and time-consuming. Although the simulation reported in the paper is much faster than full training, it may still incur a large overhead. I think this paper should include more analysis of the aspects that influence the speed and performance of the simulation, such as larger numbers of samples and higher data dimensions." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* Could the authors explain the VR-sampling strategy (lines 315-323) in more detail? \n* Could the authors offer a figure like Figure 5(a) in [1] to show how the VR-sampling strategy improves the gradient variance?\n\n[1] Tero Karras, et al. Elucidating the design space of diffusion-based generative models." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* This paper provides a theoretical analysis of the gradient variance for the conditional flow matching loss and proves the convergence rate during training.\n* The experimental results are clear and verify well the effectiveness of the variance-reduction sampling strategy." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a variance-reduction-based diffusion timestep sampling strategy to accelerate the training of DMs. The authors identify that the variance of gradient estimates in the training process is higher at intermediate timesteps, which affects training stability. They propose VR-sampling to prioritize sampling from high-variance regions, thereby accelerating training. 
The method is shown to significantly speed up training across various noise schedulers and data dimensions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper provides a theoretical analysis of the variance of the DSM loss and the convergence speed of DM training under SGD. Based on this analysis, the paper proposes a method to accelerate training of DMs. While the experimental results appear promising, there is a lack of analysis explaining why the VR-sampling strategy, specifically \"sampling timesteps in high-variance regions more frequently,\" leads to improved convergence speed.\n\n2. Building on point 1, it seems that the proposed method is analogous to some empirical findings, such as the log-normal and logit-normal techniques mentioned in [1] and [2]. The relationship between the theoretical insights and the proposed algorithm could be made more explicit.\n\n3. Section 4.2 is not entirely convincing. The results on the baseline models are not as strong as those reported in the original paper. For instance, in [3], the FID scores (cfg=1.5) are 2.06 and 2.62 on ImageNet 256 and ImageNet 512, respectively. In contrast, the best reported FID score for the baseline in this paper is 8.34, as shown in Table 2. While I understand that the experiments have controlled the number of training iterations, I believe a comparison of the convergence points is also necessary to provide a more comprehensive evaluation.\n\n4. On line 290, the Monte Carlo approximation is somewhat confusing. Are you using samples $x_1\sim q(x_1)$ to compute both the outer expectation and the inner expectation? Could the authors please provide a more detailed explanation to clarify this concept?\n\n**Minor points**\n* The equation at line 290 is not labeled.\n\n[1] Tero Karras, et al. Elucidating the design space of diffusion-based generative models.\n\n[2] Patrick Esser, et al. Scaling rectified flow transformers for high-resolution image synthesis. In ICML 2024.\n\n[3] Nanye Ma, et al. SiT: Exploring flow and diffusion-based generative models with scalable interpolant transformers." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Is it right that the proposed VR-sampling is only performed in the training phase, not in the inference phase?\n- VR-sampling can support different choices of noise scheduler (diffusion, linear, cosine), but it seems to be currently tested only on the deterministic ODE formulation. Is it possible to extend it to the SDE formulation (as future work)?\n- Does Figure 2 show that S is highly correlated with the resolution, as both CIFAR and ImageNet 256 are using a 64x64 resolution?\n- Could the authors provide some insight into how the cfg value affects the results? I can see that with higher cfg in Figures 3, 4, and 5, the FID curve for VR-sampling converges more similarly to the baseline (Fig. 3). 
That said, what would you expect to happen if cfg were higher, e.g., 4.0 — would the method still clearly outperform the baseline?\n- In lines 324-331, the authors mention that VR-sampling takes extra time to find the probability density function (PDF) and construct $x_t$. To my understanding, this process requires evaluating the equation (missing equation number?) in line 290 and the term $p_t(x|x_1,i)$ using the neural network prediction. But it is not clear to me which state/checkpoint of the neural network is used to compute this term.\n- It is briefly mentioned that \"Perception prioritized training of diffusion models\" uses heuristics to sample time steps based on the signal-to-noise ratio (SNR), but I am wondering how it compares to the proposed VR-sampling method.\n- The improvement seems more visible in Figure 13 than in Figures 14 and 15. Does this give a hint that VR-sampling might work better with higher-resolution images or latents?\n\nSome minor comments:\n- I would suggest that Figures 13-15 be larger, to better show details and utilize the space.\n- In Table 1, \"Ho et al., 2020b\" is the same as \"Ho et al., 2020a\". It is cited twice.\n- In Table 6, how is the CFG scale used in training? According to \"Classifier-free diffusion guidance\" [Ho and Salimans 2022], the training does not use the value directly, but instead randomly discards the conditioning to train unconditionally." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- How to sample time steps is indeed an important problem in training diffusion models, and the method has a clear motivation.\n- The paper is well-written and easy to follow.\n- The results, including FID curves and tables, clearly and consistently show the effectiveness of the proposed method on different variants of flow models and model architectures, which have been widely used in generative modeling recently.\n- The proposed method is also supported by the necessary proofs of bounds and convergence analysis.\n- It is appreciated that the paper includes an anonymous code link for reproducibility." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a Variance-Reduction Sampling (VR-sampling) strategy that samples the time steps in high-variance regions more frequently to enhance training efficiency in diffusion models. The method first identifies the root cause of the high variance and proposes an effective importance-sampling solution. The proposed strategy is also supported by proofs of bounds and convergence analysis. The results show that the proposed method outperforms various baselines, with experiments on the ImageNet and CIFAR datasets and different model architectures including U-Net, DiT, and SiT." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper might lack some discussion of related methods in diffusion models. For example, in line 196, Iterative α-(de)Blending [1] also shares the same formulation as rectified flow. In Monte Carlo sampling, variance reduction can be achieved by using not just importance sampling, but also control variates and correlated sampling, which have initially been explored for diffusion models [2,3]. 
Also, there exists a similar line of work considering variance reduction in deep learning, such as Deep Learning with Importance Sampling [4] (and there are more), which might also be worth mentioning.\n- The visual improvements of VR-sampling shown in Figures 14 and 15 do not seem significant to me. It would be great to have results on one of the animal/human face datasets (e.g., AFHQ, CelebA, FFHQ) or one of the LSUN datasets. As observed in Figures 13 and 14, the method seems to work better on face images. Also, results on LSUN bedroom or church might be interesting, as those images have specific textures and patterns.\n\n\n[1] Iterative 𝛼-(de)Blending: a Minimalist Deterministic Diffusion Model, Heitz et al. 2023\n\n[2] Variance reduction of diffusion model's gradients with Taylor approximation-based control variate, Jeha et al. 2024\n\n[3] Blue noise for diffusion models, Huang et al. 2024\n\n[4] Not All Samples Are Created Equal: Deep Learning with Importance Sampling, Katharopoulos and Fleuret 2018" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024vrsampling,\ntitle={{VR}-Sampling: Accelerating Flow Generative Model Training with Variance Reduction Sampling},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x3jRzVAltZ},\nnote={under review}\n}" }, "abstract": { "value": "Recent advancements in text-to-image and text-to-video models, such as Stable Diffusion 3 (SD3), Flux and OpenSora, have adopted rectified flow over traditional diffusion models to enhance training and inference efficiency. SD3 notes increased difficulty in learning at intermediate timesteps but does not clarify the underlying cause. In this paper, we theoretically identify the root cause as a higher variance in the loss gradient estimates at these timesteps, which hinders training efficiency. Furthermore, this high-variance region is significantly influenced by the noise schedulers (i.e., how we add noise to clean images) and data (or latent space) dimensions. Building on this theoretical insight, we propose a Variance-Reduction Sampling (VR-sampling) strategy that samples the timesteps in the high-variance region more frequently to enhance training efficiency in flow models. VR-sampling constructs sampling distributions based on Monte Carlo estimates of the loss gradient variance, allowing it to easily extend to different noise schedulers and data dimensions. Experiments demonstrate that VR-sampling accelerates training by up to 33\% on ImageNet 256 and 50\% on ImageNet 512 datasets in rectified flow models. Furthermore, VR-sampling could simplify the hyperparameter tuning of logit-normal sampling introduced in SD3. \nThe code is available anonymously at~\url{https://github.com/AnonymousProjects/VR_sampling.git}." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Flow Generative Models", "Training Acceleration", "Diffusion Models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d1e1398988d672b59bbdeb4eda67284fa0e9da4c.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "VR-Sampling: Accelerating Flow Generative Model Training with Variance Reduction Sampling" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
x3l0fQubOn
Structural Quantile Normalization: a general, differentiable feature scaling technique balancing Gaussian approximation and structural preservation
main
Active
feature scaling;preprocessing;normal distribution;differentiable transformation;quantile normalization;neural networks
optimization
1;3;3;3
5;5;4;3
1;2;3;3
1;1;2;2
2;2;3;3
2.5
4.25
2.25
1.5
2.5
-0.522233
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see weaknesses" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is well-written and easy to follow.\n2. The experiments in Table 1 show the efficacy of the method on the given dataset.\n3. Fast-SQN solves the computational efficiency issue by cleverly adding a spline interpolation layer." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a normalisation scheme called Structural Quantile Normalisation (SQN). Unlike many other methods, the proposed transformation is differentiable. The authors blend in the ideas of quantile normalisation, kernel density estimation, and PCHIP to propose their method. The method can be slow in its true form, but the authors provide a modification called Fast-SQN to counter it." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Table 1 shows better performance of the proposed method but lacks the error bounds, it might be helpful to see the fluctuation if different random seeds are employed for the same model.\n2. Authors cited that Gaussianisation techniques can mean losing intrinsic patterns in the data and are non-differentiable. However, I would argue that methods like Normalising Flows are differentiable, and a few such layers in the beginning might help with this. It would be interesting to see if NF-transformed data lags in performance compared to the methods reported in Table 1. \n3. Only one dataset is discussed, it is hard to say if the results might hold for any kind of data.\n4. Can this transformation also help with Image data? or use of KDE limits applicability to high dimensionality? \n5. Can using these transformation layers with methods that aim at transforming the distribution to standard normal can we reduce the computational complexity of those methods?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "NA" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weakness" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The proposed methods are well-motivated and explained well. It covers existing methods as special cases and can be computed efficiently with the proposed FAST-SQN version." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposed a feature scaling method that transforms a feature such that the feature is close to normally distributed after the transformation. A kernel density estimation is fitted to model the pdf of the feature, then the inverse Gaussian cdf is applied to the cdf estimate of the feature. The kernel density estimation allows the method to keep the local structure of the original distribution. Experiments in one data set demonstrate that the proposed methods outperform other scaling methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Empirical validation is not enough\n\n1. Only one real data experiment is provided\n2. It is hard to tell if the performance improvements are significant, e.g. in table 1, model 3, Fast-SQN only improves RMSE by 0.001 over YJN, mean and standard deviation over multiple runs should be reported.\n3. The proposed method should be a general scaling method for any feature, but it seems only the target feature is normalized in the experiments (line 390 in the paper). I thought rescale input features would be more useful in model training.\n4. One advantage that is emphasized for the proposed method is its differentiability, but if it is only used in the experiment setting in the paper, differentiability doesn't seem to have any advantage.\n\nOverall, I think the empirical validation is far from convincing. The proposed methods should be able to apply to a wide variety of data. Much more experiments are needed to really demonstrate that it is superior to existing methods. If it can indeed improve model performance across a lot of data sets and especially if it can be applied similarly to batch-norm or layer-norm layer, this could be a major contribution to the community, but the current results don't support the claim." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- The proposed method seemingly only works in one dimension. In multi-dimensions, should one use Sklar's Theorem, if so, what do you do with Copulas? If not, how does this apply to general probability measures on $\\mathbb{R}^d$?\n\n- What is the regularity (continuous, uniformly continuous, etc... 
for some reasonable metrization of the weak topology on a reasonable subset of $\\mathcal{P}(\\mathbb{R})$) of this operation in general? Why should it be invertible in high dimensions? \n\nThe biggest question I have is: *Why use this method at all?* Specifically:\n\n- Is there any *provable* mathematical guarantee that it *must* improve downstream learning? \n\n- If you cannot provide a guarantee, then there must at least be convincing experimental evidence (what is offered certainly is not).\n\n- What is the takeaway of Proposition 1, why is it important, and what is the concrete implication for the proposed method? \n\nEven if the above questions could be answered, I must ask: how does this work in high dimensions? This is not obvious to me; I assume it is not obvious to the authors either, or the method would have been presented in greater generality." }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The authors provide a reasonable literature review and the graphs are very elegant." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a new data-standardization method *applicable only to one-dimensional data*, which uses a specific procedure to monotonically perturb a KDE fitted to the empirical measure of a one-dimensional law, mapping it to a centered normal law. \n\nSome extremely limited numerics on a very small dataset (used in basic textbooks and old Kaggle competitions) are used to evaluate the normalization procedure." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There is no real justification or support for the method whatsoever. \n\n- There is no theoretical support (there is a proposition in the appendix, which is formulated non-rigorously). What type of convergence is this? The KDE is a function of a random quantity, the empirical (random) measure associated with some one-dimensional law, so should I interpret this as $\\omega$-wise convergence? \n\nTherefore, there is no theoretical contribution.\n\n- The numerics are not convincing: only a toy dataset (the California Housing Market) is considered, which is something one sees in introductory textbooks. Thus, there is also no experimental support for the proposed method.\n\n\n---\nBtw, the convergence in the proposition in the appendix is in the point-open topology; perhaps that should be mentioned (as the limit is difficult to interpret without reading the first line of the proof). I think this should be clearly said in the statement." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "With respect to weakness 2 above, I would suggest combining with either linear methods (which still have some value due to interpretability) or modern neural network approaches like TabPFN. Or perhaps the authors could evaluate their approach on neural network methods -- eg CSDI-T (Zheng & Charoenphakdee, TRL @ NeurIPS 2022) or TabDDPM (Kotelnikov et al, ICML 2023) -- for tabular data generation, which also require feature preprocessing." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Feature preprocessing is an important task, as even deep learning methods such as TabPFN require preprocessing to deal with highly-skewed features.\n\n2. Using PCHIP splines to achieve linear scalability while maintaining smoothness and monotonicity is an interesting proposal." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "SQN is a feature preprocessing method that interpolates between z-score scaling and quantile normalization via the Gaussian KDE, utilizing PCHIP splines to achieve linear scalability while maintaining smoothness and monotonicity. This method is evaluated on the California housing dataset." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The experimental section is quite weak, only evaluating on the California housing dataset. The results would be much more convincing if the authors showed benefits on a benchmark of tabular datasets. And, while the authors claim that differentiability is valuable, but don't show that this provides any empirical benefits.\n\n2. The authors evaluate their method in combination with feedforward neural networks, which are not even close to SotA for tabular data. \n\n3. The originality of the proposal and the completeness of the related work section are seriously affected by the fact that the overall proposed approach is the same as KDIT (The Kernel Density Integral Transformation, McCarter, TMLR 2023). In both cases, one uses the KDE to smooth the pdf of a feature, then applies the inverse cdf of a reference distribution.\n\nGranted, there are some differences, but these are minor:\n\n(A) While the KDIT software package implements using the Gaussian as the reference output distribution, the KDIT paper only evaluates the uniform distribution as reference, thus interpolating between min-max scaling and quantile normalization, while SQN interpolates between z-score scaling and quantile normalization.\n\n(B) KDIT uses the polynomial-exponential kernel (Fast exact evaluation of univariate kernel sums, Hofmeyr, TPAMI 2019) to obtain linear complexity, while SQN uses PCHIP splines for this. PCHIP splines are superior in that they enforce not just monotonicity, but also smoothness; whereas the KDIT is only almost-everywhere smooth. It's not clear that everywhere smoothness is really valuable, given that the authors combine their approach with neural networks with not-everywhere-smooth ReLU activations." 
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce a differentiable feature scaling technique that balances Gaussian approximation and structural preservation, outperforming existing methods in multiple error metrics on real-world data." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024structural,\ntitle={Structural Quantile Normalization: a general, differentiable feature scaling technique balancing gaussian approximation and structural preservation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x3l0fQubOn},\nnote={under review}\n}" }, "abstract": { "value": "Feature scaling is an essential practice in modern machine learning, both as a preprocessing step and as an integral part of model architectures, such as batch and layer normalization in artificial neural networks. Its primary goal is to align feature scales, preventing larger-valued features from dominating model learning—especially in algorithms utilizing distance metrics, gradient-based optimization, and regularization. Additionally, many algorithms benefit from or require input data approximating a standard Gaussian distribution, establishing \"Gaussianization\" as an additional objective. Lastly, an ideal scaling method should be general, as in applicable to any input distribution, and differentiable to facilitate seamless integration into gradient-optimized models. Although differentiable and general, traditional linear methods, such as standardization and min-max scaling, cannot reshape distributions relative to scale and offset. On the other hand, existing nonlinear methods, although more effective at Gaussianizing data, either lack general applicability (e.g., power transformations) or introduce excessive distortions that can obscure intrinsic data patterns (e.g., quantile normalization). Present non-linear methods are also not differentiable. We introduce Structural Quantile Normalization (SQN), a general and differentiable scaling method, that enables balancing Gaussian approximation with structural preservation. We also introduce Fast-SQN; a more performance-efficient variant with the same properties. We show that SQN is a generalized augmentation of standardization and quantile normalization. Using the real-world \"California Housing\" dataset, we demonstrate that Fast-SQN outperforms state-of-the-art methods—including classical and ordered quantile normalization, and Box-Cox, and Yeo-Johnson transformations—across key metrics (i.e., RMSE, MAE, MdAE) when used for preprocessing.\nFinally, we show our approach transformation differentiability and compatibility with gradient-based optimization using the real-world \"Gas Turbine Emission\" dataset and propose a methodology for integration into deep networks." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "feature scaling", "preprocessing", "normal distribution", "differentiable transformation", "quantile normalization", "neural networks" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/35f7066fa6d2282022689c85f8cd67eb9d852675.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/2f6348ab81083738649af1bc39c1483a76123269.zip" }, "title": { "value": "Structural Quantile Normalization: a general, differentiable feature scaling technique balancing gaussian approximation and structural preservation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
x3lE88YkUl
Improving Resistance to Noisy Label Fitting by Reweighting Gradient in SAM
main
Active
label noise;sharpness-aware minimization;optimization
optimization
3;5;5;5;6
5;3;3;4;4
3;2;3;2;3
2;2;2;2;2
3;3;3;4;3
4.8
3.8
2.6
2
3.2
-0.600099
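The reviews of this submission repeatedly contrast the per-coordinate SGD gradient with the gradient SAM computes at its ascent point. For reference, a minimal PyTorch sketch of the standard SAM step (Foret et al., 2021) that produces this pair of gradients; the perturbation radius `rho` and the closure-style API are illustrative choices, not the authors' code:

```python
import torch

def sam_gradients(params, closure, rho=0.05):
    # `closure` re-evaluates the loss and calls backward(); grads start at None.
    closure()                                   # SGD gradients at w
    g_sgd = [p.grad.clone() for p in params]
    norm = torch.sqrt(sum((g ** 2).sum() for g in g_sgd)) + 1e-12
    with torch.no_grad():
        for p, g in zip(params, g_sgd):
            p.add_(g, alpha=float(rho / norm))  # ascend to w + rho * g / ||g||
    for p in params:
        p.grad = None
    closure()                                   # SAM gradients at the ascent point
    with torch.no_grad():
        for p, g in zip(params, g_sgd):
            p.sub_(g, alpha=float(rho / norm))  # restore w
    return g_sgd                                # p.grad now holds the SAM gradients
```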
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "(1) Please explain how you consider the role of early stopping, and what regime does the analysis and SANER experiments consider. \n\n(2) Does SANER improve over SAM even accounting for early stopping, i.e. is the best early-stopped checkpoint of SANER better than best early-stopped checkpoint with SAM?\n\n(3) I am sorry if I missed this, but how do you set the SAM perturbation radius hyperparameter. How does it interact with \\alpha? Do the trends of gradients across different groups hold for different radii as well?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper builds up and contributes to the literature on understanding the benefits of SAM. The paper is generally well written and easy to read. Understanding SAM is broadly conceptually interesting, and the application to label noise is practically well motivated. \n\nIt builds up ideas in a simple fashion, and lays out clear hypotheses being tested and implications of these hypotheses. The study of which parameters have their gradients upweighted, down-weighted or reversed is novel to the best of my knowledge. The broader implications of this analysis are a little unclear (detailed below), but the analysis is clean and clearly presented. \n\nThe paper provides a new and simple modification to SAM based on the analysis and performs a large-scale evaluation over many datasets and architectures. While I have some reservations about the soundness of some aspects, overall this was a clear effort for a thorough investigation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper builds on the observation that SAM improves robustness to label noise over SGD. The paper analyzes which parameters' gradients are affected in different ways, between the SAM and SGD update. This provides some insight into the importance of further decreasing the norms of those parameters that have a lower norm under the SAM perturbation. Empirical results seem to show gains when performing this modification of SAM." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "One glaring gap in the analysis of the paper is the effect of early-stopping. Early-stopping seems to improve clean test accuracy in both SAM and SGD. The paper does not make it clear whether the goal is to compare SANER, SAM, and SGD at the final checkpoint or best early stopped checkpoint. Even with early stopping, SAM outperforms SGD. However, from Fig 1(c) it seems like SAM vs SANER shows no gains if we do early stopping. In all the experiments reported later, it is unclear which checkpoints are compared. 
\n\nFrom an analysis perspective, the authors acknowledge the weakness of focusing on noisy train accuracy (rather than final test accuracy): intuitively, fitting noise at training is bad, but for different algorithms this could manifest differently in terms of affecting test accuracy. Furthermore, clean training might also be affected, which can affect test accuracy. I agree with the authors on this weakness. \n\nFrom a practical perspective, if the gains do not hold with early stopping, the benefit is less clear. Furthermore, SANER introduces a new hyperparameter \\alpha and adds complexity in practice. \n\nThe role of the perturbation radius of SAM is not discussed; in practice, this is another important parameter. How does this radius interact with the new hyperparameter \\alpha introduced? Is it strictly better to tune \\alpha rather than the radius, or should we tune both in parallel, or do they interact in ways such that we need to tune over both in combination? \n\nOverall, the analysis is potentially clean and interesting, but for the reasons above, the soundness and comprehensiveness of the analysis are missing some key aspects. There are no clear generalizable insights from this work, and gains in practice (if they hold up under further experiments) come at the cost of additional hyperparameters." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the comments on weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. In the context of increasingly large training datasets, it is practically significant to effectively prevent models from overfitting on noisy label data. Relying solely on algorithms and rules to clean the data is often too costly, so optimizing algorithms to reduce the impact of noisy label data is more valuable.\n2. The author's approach of comparing the relative sizes of the gradients of each parameter in the SAM and SGD algorithms during model updates, and using this as a criterion to assess whether the parameters are causing the model to overfit to noisy label data, is straightforward and exhibits a certain level of innovation. The experimental design is also very reasonable.\n3. The author's experimental design is quite rigorous, and they conducted a substantial number of experiments and tests on several public datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The author studied the relationship between the gradients of the SAM and SGD algorithms, and through experimental validation, found a method more suitable for enhancing the model's resistance to overfitting on noisy label data. The algorithm modifies the weights of different parameter gradients during model updates to improve the model's robustness. 
In terms of experiments, the author conducted validation on several classic public datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. I noticed that Figure 2 in Section 3 is based on training a ResNet-18 model on CIFAR-10. Shouldn't testing be conducted on more models and datasets to confirm that the three types of parameters, Groups A, B, and C, indeed exhibit the proportions mentioned in the paper? Perhaps conducting experiments on a larger ResNet model (ResNet-50) and a larger dataset like CIFAR-100 or ImageNet (if conditions permit) would provide more convincing results.\n\n2. Based on the author's analysis, at epoch 100 there was a noticeable increase in the parameter ratio of Group C, which had remained around 10% during the early training stages. Would it be more meaningful to directly use the reversed gradients for noisy label data? Also, I noticed that the ratios of Group B and Group C are quite similar in the early stages of training, and the values for Group A are mostly around 1.0 while those for Group B are around 0.99. If this is the case, I don't believe that SGD and SAM have different perspectives on the parameters in these two groups. As before, I believe it is necessary to validate this distribution pattern on larger models and datasets. Additionally, I think it would be meaningful to further investigate whether there are significant changes in the parameters within the three groups as training progresses, as well as the specific distribution of values." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the weaknesses above." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper reveals an interesting phenomenon, showing that there are dimensions with lower magnitudes and the same signs as the corresponding components in SGD that are responsible for the superior robustness of SAM. This is interesting and novel to my knowledge.\n\nThe paper is clear and easy to follow. While I found the observation interesting, I believe the paper needs to back up the results with a more in-depth study (either some theoretical analysis or a more in-depth ablation study and a wider range of experiments) to be considered significant." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper conducts an empirical study which reveals that during training, some gradient dimensions contribute to the superior robustness of SAM over SGD against noisy labels. These dimensions have lower magnitudes and the same signs as the corresponding components in SGD. Reducing the weight of such dimensions during training improves robustness of SGD and SAM variants against label noise."
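A hedged sketch of the Group A/B/C split that these reviews refer to, comparing the SGD and SAM gradients coordinate by coordinate (my reading of the reviews; the authors' exact criteria, e.g. the handling of ties and exact zeros, may differ):

```python
import torch

def classify_components(g_sgd: torch.Tensor, g_sam: torch.Tensor):
    same_sign = g_sam * g_sgd > 0
    group_a = same_sign & (g_sam.abs() >= g_sgd.abs())  # same sign, upweighted by SAM
    group_b = same_sign & (g_sam.abs() < g_sgd.abs())   # same sign, downweighted by SAM
    group_c = ~same_sign                                # sign reversed (or zeroed)
    return group_a, group_b, group_c
```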
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While the proposed idea of reweighting dimensions with lower magnitudes and same signs as SGD is simple (simplicity is good), it requires an appropriate linear scheduler that gradually decreases the reweighting factor from 1 to a predetermined value over k epochs. Both the reweighting factor and k needs to be tuned and I didn't find an ablation study or any insight on how to set these parameters for different datasets/architectures. Table 5, 6 in the appendix only compares using or not using this scheduler (k=0 and k=50).\n\nConsidering that this is an empirical study, I'd expect much more in-depth empirical study of the observed phenomena. For example, what is the effect of datasets and architectures on the observed phenomena? What's the effect of the label noise level on the observed phenomena? How should one set the parameters of the proposed method? The paper includes experiments on Cifar10-100 (and their subsampled versions) with label noise level of 25% and 50%, and Webvision. On Cifar-10, the improvement is not significant, while on Cifar-100 the proposed method works much better. Nevertheless, there is no explanation or deeper analysis provided in the paper. Existing methods for robust training against noisy labels usually perform much better on Cifar10. Considering that the pattern is different for the proposed method, digging deeper into why things work differently would be an interesting addition to the paper. The authors can also analyze the effect of the proposed reweighting methods combined with existing robust training methods, as is done in the original SAM paper.\n\nFinally, I believe adding a theoretical study of the observed phenomena using a simple data model and network architecture, or even a simple toy example explaining the effect of reweighting would significantly strengthen the paper." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1)Regarding the division of gradient components into Group A, B, and C, what specific criteria or methodology did you use for this grouping? Could you provide pseudocode for this process to make the grouping mechanism clearer? Additionally, could you explain why these particular components were chosen for reweighting, and include systematic mathematical derivations to show how these components effectively suppress noisy label fitting?\n2)The paper includes comparisons with VaSSO; however, the results under different noise rates show considerable gaps. Could you explain why such significant differences occur under identical experimental settings? 
Moreover, could you conduct additional experiments covering more diverse noise ratios to provide a balanced comparison, and clarify whether these differences are due to inherent limitations of the methods or specific experimental setups?\n3) In the appendix, only ablation studies for the presence or absence of the α parameter were conducted, but there is a lack of detailed experimental analysis on the selection of α values (e.g., from 0 to 1 or values greater than 1). How do different values of α affect performance across varying noise ratios and network architectures? Providing such detailed experimental support would help in understanding the optimal choice of α under different scenarios." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1) SANER introduces a novel reweighting mechanism for specific gradient components, effectively extending SAM to better handle noisy labels. This innovation is well-motivated and provides a meaningful contribution to the field of noise-resistant optimization.\n2) The experimental validation convincingly demonstrates that SANER outperforms SAM and other optimizers in different noise scenarios. The visualizations provided further reinforce the claims regarding the noise robustness of SANER.\n3) The use of a linear schedule for adjusting parameters during training is practical and helps stabilize the learning process, especially in the early stages when noisy labels are more problematic." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents SANER, a novel approach designed to improve the robustness of Sharpness-Aware Minimization (SAM) in the presence of noisy labels. The proposed method introduces a reweighting mechanism for gradient components to suppress the fitting to noisy labels, thereby enhancing generalization performance. Extensive experiments are conducted on various datasets, demonstrating the effectiveness of SANER compared to SAM and other optimizers." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) The paper lacks a detailed explanation of how the gradient components are divided into Groups A, B, and C. It is unclear what specific criteria were used for this division, and including pseudocode would enhance clarity and reproducibility. Additionally, while visual evidence supports the reweighting mechanism, systematic mathematical derivations justifying the effectiveness of the approach in suppressing noisy label fitting would be valuable.\n2) The manuscript includes comparisons with VaSSO, but there are notable gaps in performance under different noise rates, even with identical experimental settings. Clarifying why these significant differences occur would improve the discussion. Moreover, additional experiments with varied noise ratios would help provide a more balanced comparison. Furthermore, comparing SANER with methods such as AdaSam and Adan could help better demonstrate its broader generalization capabilities or effectiveness across different tasks and domains. AdaSam is primarily used in natural language processing tasks, while Adan is a recent adaptive Nesterov momentum algorithm showing promise in vision, language, and reinforcement learning tasks. \n3) The analysis of the sensitivity of the parameter α is limited. 
A more comprehensive study covering different noise ratios, network architectures, and training conditions would be beneficial. For instance, exploring α’s behavior in noisy segmentation tasks using Vision Transformers (e.g., ViT-based noisy segmentation) as well as standard classification tasks with different architectures (e.g., ResNet versus VGG) would provide a clearer picture of the parameter's impact across varied settings." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Regarding Figure 3, since Group A also contributes to SAM’s resistance to noisy fitting, why not reweight Group A as well?\n2. Since SAM enhances model performance in clean label scenarios, would SANER perform worse than SAM in such cases?\n3. Are there any improvements when combined with other label noise learning algorithms?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The writing quality is good.\n2. This manuscript provides a detailed experimental analysis of SAM and its robustness to label noise.\n3. The proposed method is tested across different model architectures, layer widths, and data sizes, demonstrating its generalization capability across different settings." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This manuscript explores the potential of SAM in the context of noisy label learning. Specifically, it provides an empirical study of the component-wise gradients in SAM, identifying the gradient components most crucial for handling noisy labels. Building on these insights, this manuscript proposes SANER, a method that adjusts SAM gradients to improve model robustness against label noise." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The theoretical analysis is lacking. Specifically, there is no justification for why downweighting the gradient component in Group B helps the model resist label noise, or why gradients from noisy samples dominate Group B as the model begins to overfit. A theoretical analysis could suggest a more effective gradient adjustment method than simple downweighting.\n2. Some experimental results are missing.\n 1. Experimental results for high label noise settings (e.g., 80% symmetric label noise) are absent. Given that this manuscript primarily addresses label noise learning, evaluating the proposed method under high label noise conditions is essential, as it is a common practice in current label noise learning literature.\n 2. The clean accuracy results for different values of $\\alpha$ are missing. Recall that $p_\\text{clean}$ still accounts for 1 / 3 of the total components in Group B, so downweighting Group B may potentially harm the accuracy of clean samples." 
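Given a SAM/SGD gradient pair (as in the SAM sketch earlier in this record), a hedged sketch of the SANER-style reweighting the reviews describe: shrink only the Group B coordinates by a factor alpha, with alpha annealed linearly from 1 to its target over the first k epochs. The default values are placeholders, not the authors' settings:

```python
import torch

def alpha_at(epoch, alpha_target=0.5, k=50):
    # Linear schedule from the reviews: alpha goes from 1 to alpha_target over k epochs.
    return 1.0 - (1.0 - alpha_target) * min(epoch / max(k, 1), 1.0)

def saner_reweight(g_sgd, g_sam, alpha):
    # "Group B": same sign as the SGD gradient, smaller magnitude under SAM.
    group_b = (g_sam * g_sgd > 0) & (g_sam.abs() < g_sgd.abs())
    return torch.where(group_b, alpha * g_sam, g_sam)
```

The reweighted tensor replaces the SAM gradient in the optimizer step; setting alpha to 1 recovers plain SAM, which is one way to read SANER as a one-parameter extension.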
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024improving,\ntitle={Improving Resistance to Noisy Label Fitting by Reweighting Gradient in {SAM}},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x3lE88YkUl},\nnote={under review}\n}" }, "abstract": { "value": "Noisy labels pose a substantial challenge in machine learning, often resulting in overfitting and poor generalization. Sharpness-Aware Minimization (SAM), as demonstrated in Foret et al. (2021), improves generalization over traditional Stochastic Gradient Descent (SGD) in classification tasks with noisy labels by implicitly slowing noisy learning. While SAM’s ability to generalize in noisy environments has been studied in several simplified settings, its full potential in more realistic training settings remains underexplored. In this work, we analyze SAM’s behavior at each iteration, identifying specific components of the gradient vector that contribute significantly to its robustness against noisy labels. Based on these insights, we propose SANER (Sharpness-Aware Noise-Explicit Reweighting), an effective variant that enhances SAM’s ability to manage noisy fitting rate. Our experiments on CIFAR-10, CIFAR-100, and Mini-WebVision demonstrate that SANER consistently outperforms SAM, achieving up to an 8% increase on CIFAR-100 with 50% label noise." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "label noise", "sharpness-aware minimization", "optimization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/cb4ded673972927aaabd709350bb6b7269d13f00.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/419eb160b3e6b48a4c20d11100a0abf5c3f350eb.zip" }, "title": { "value": "Improving Resistance to Noisy Label Fitting by Reweighting Gradient in SAM" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
x418ZpazsR
Are Language Model Logits Calibrated?
main
Active
language modeling;calibration;model understanding
interpretability and explainable AI
3;5;5;5
4;5;3;4
2;4;3;3
2;2;1;2
3;4;3;4
4.5
4
3
1.75
3.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "* Could you provide more details about the datasets that you’re using? For example, how many datapoints are in each and how “diverse” are the different prompts\n* In 3.2 in the definition of PM(T), should it be lowercase pi?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The work points out an important shortcoming of language models, namely, an inability to accurately reflect probabilistic information given in a context. They further show how recent fine-tuning approaches affect this calibration\n* Many different models are evaluated, including the base and chat versions of several popular architectures, showing consistency and their observations and allowing for broader conclusions \n* The paper is well-written and the authors provide good motivation for the problem they’re exploring" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The study investigates the alignment of language models’ output probabilities with the numeric or probabilistic information in the contexts they’re given. They explore models that have undergone different fine-tuning techniques (instruction tuning, preference alignment) and see how this affects the model’s explicit reasoning in comparison to base models. They look at whether biases in token probabilities (e.g, a first-mentioned bias) can be identities. They find that across model architectures, language models are generally not well calibrated in this respect. Instruction-tuning seems to exacerbate the issue, often leading to mode-collapse. They also observe some interesting systematic biases for different model families/fine-tuning strategies. These findings highlight an important limitation of language models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The contributions of the work are rather limited. A very specific question is being asked and its unclear how relevant this question is to the broader community. While the work reveals an undesirable model behavior, it doesn’t propose methods for fixing these behaviors. 
There is a short discussion of potential reasons for the behaviors, but no empirical evidence for or against these hypotheses is given\n* Many of the results are missing standard errors or significance tests.\n* The enormous set of results in the appendix is difficult to navigate" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "> Title: Are Language Model Logits Calibrated\n\nThe word logits is not used in the paper at the moment and logits (the pre-softmax activations) are not analysed---only the probabilities output by LMs (post-softmax) are analysed. I thus believe the word \"logits\" doesn’t belong in the paper’s title. Further, the title—in my opinion—suggests the paper is about a more traditional notion of model calibration (e.g., Expected Calibration Error; ECE). Changing the title to highlight this different “view” of model calibration could be helpful. E.g., “Are Language Model Outputs Calibrated to Explicit Probabilistic Prompts” or something analogous.\n\n> Most Plots\n\nThe Figures in this paper are not readable when printed in black and white." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper investigates an interesting question: whether language models produce calibrated outputs when prompted with explicit probabilistic information.\n\nThe paper also proposes two related experiments to investigate model behaviour in this setting, analysing results in these two settings in detail." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper analyses how calibrated language model (LM) outputs are when prompted with explicit probabilistic information (e.g., prompt: “an urn has 5 blue and 3 red balls, I pick one, this ball is”, LM output: blue/red).\nThey perform two main experiments:\n* “Distributions”. In this first experiment, they prompt the model with information about a uniform distribution (e.g., “Sampling from a uniform distribution with support from 2 inclusive to 5 exclusive, I sampled the number”) and evaluate the probability an LM places on tokens $\\{0, 1, 2, 3, …, 9\\}$.\n* “Probabilities”. In this second experiment, they prompt the model with an implicit distribution (e.g., “From 17 red marbles and 99 blue marbles, Billy reached blindly into the bag and grabbed a marble with the colour”) and see whether the model places calibrated probability on the tokens of interest (in this case, $\\{red, blue\\}$).\n\nIn the first experiment, the paper finds that instruction-tuned models are *less* calibrated than base models. It also finds that models have systematic preferences for some tokens (e.g., systematically preferring token 3 to others). 
In the second experiment, the paper finds that instruction-tuned models are *more* calibrated than base models (the opposite of the first experiment). \n\nThey also perform a human study, where they compare LM to human behaviour." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While the paper performs two interesting experiments to investigate the role of explicit probabilistic calibration on language models, I believe that in its current state, the insights that can be drawn from it are somewhat limited. The paper performs a behavioural analysis on language models with two relatively simple and similar settings. I’d expect an ICLR paper to perform a more thorough analysis. Some suggestions below.\n\nI think the paper would be much stronger if it performed one of the following analyses (or both):\n* *Training data analysis.* While this may not be possible with the models currently analysed in the paper, the authors could extend their analysis to, e.g., Pythia or OLMo, and evaluate how the analysed LM behaviour relates to different statistics of the training data. The paper speculates about this in section 6, but this could be analysed.\n* *Mechanistic analysis.* The paper currently evaluates model behaviour purely as a black box. Performing mechanistic/causal analysis of how this behaviour relates to model internal activations could make the results more insightful. E.g., the paper could use distributed alignment search (Geiger et al. 2023) to find subspaces in the model which control the model’s behaviour on these tasks.\n\n\nGeiger et al. 2023. Finding Alignments Between Interpretable Causal Variables and Distributed Neural Representations" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "n/a" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- Well defined problem statement; the authors don't make any claims that they can't justify.\n- Thorough set of experiments to validate the existence of this problem, with very detailed methodology.\n- A human study that shows, interestingly, that humans are also not well-calibrated and that by some measures, the models are actually better calibrated than humans." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors find that LLM predictions are poorly calibrated for outputs that should ideally recover some aleatoric uncertainty, like the probability of drawing marbles of one color out of a bag. Different models behave in different systematically uncalibrated ways, and SotA models often allocate all of the probability to just one token."
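To make the probing setup concrete, a minimal sketch of reading off the next-token probability mass a causal LM assigns to the valid options in the marble prompt from the summaries above. The model name, prompt wording, and single-token assumption are illustrative; multi-token options (a concern raised in a later review) would need summed log-probabilities:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # stand-in; the paper evaluates larger base and chat models
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

prompt = ("From 17 red marbles and 99 blue marbles, Billy reached blindly "
          "into the bag and grabbed a marble with the colour")
with torch.no_grad():
    logits = model(**tok(prompt, return_tensors="pt")).logits[0, -1]
probs = torch.softmax(logits / 1.0, dim=-1)  # temperature 1.0; a review below notes this choice matters

# Mass on the two valid continuations vs. the context-implied 17/116 and 99/116.
for word, implied in [(" red", 17 / 116), (" blue", 99 / 116)]:
    tid = tok.encode(word)[0]  # assumes the option is a single token
    print(f"{word!r}: model={probs[tid]:.3f}  implied={implied:.3f}")
```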
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- There seems to be a lot of relevant work on LLMs expressing uncertainty that could be cited here (e.g., see https://arxiv.org/abs/2401.06730 and the related work discussed therein).\n- There is no discussion of conformal prediction and how it might be used to address this problem, it it hasn't been done so already.\n- In summary, the paper identifies a problem that is already well known in some capacity---albeit maybe not in the particular formulation proposed here. Although the paper is well-written and the problem is well-articulated, no real solution is offered. Because of this, I lean towards 'borderline reject' for an ICLR paper, though I could seeing it faring better at an NLP or CL conference." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. What would the result be like if you search over different temperature choices? \n\n2. Have you checked the actual text output of the models if they directly give the valid token in the next token position?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper is clearly written and easy to read. \n2. The idea of using context as the implicit likelihood for calibration is interesting and provides a good starting point for analysing the calibration level of language models.\n3. The human study is a good addition to show that both human and language models are biased, motivating the research for improving the calibration level of language models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on the calibration of language models. It defines calibration as the degree to which the output distribution probabilities of the candidate tokens are aligned to the likelihood inferred from the context. \nThe author evaluated the case of drawing marbles on open and close-source models and showed that LMs are poorly calibrated in this case. \nThe author also conducted further analysis and a human study to compare the calibration level of the model to humans." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The main and critical issue of this paper is that they didn't discuss the impact of the temperature of the softmax operation for getting the probabilities. The choice of temperature can greatly influence the entropy and final calibration level of the model, which will greatly affect the conclusion drawn from the paper. As shown in Table 1, changing temperature from 0.1 to 1.0 greatly changed the performance of the random baseline. \n\n2. The overall contribution of the paper is poor. 
The issue of low entropy of the Chat models and their token bias has been shown in different papers. The novelty of this paper lies primarily in the definition of calibration using context information. \n\n3. The author only focuses on the next token position. It is also possible that the model may not give the answer in the next token. As discussed in the paper, the PM can be low, meaning all valid tokens have low probabilities. If the model outputs 'of red' rather than 'red', probabilities of the second position should also be considered." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024are,\ntitle={Are Language Model Logits Calibrated?},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x418ZpazsR},\nnote={under review}\n}" }, "abstract": { "value": "Some information is factual (e.g., \"Paris is in France\"), whereas other information is probabilistic (e.g., \"the coin flip will be a [Heads/Tails].\"). We believe that good Language Models (LMs) should understand and reflect this nuance. Our work investigates this by testing if LMs' output probabilities are *calibrated* to their textual contexts. We define model \"calibration\" as the degree to which the output probabilities of candidate tokens are aligned with the relative likelihood that should be inferred from the given context. For example, if the context concerns two equally likely options (e.g., heads or tails for a fair coin), the output probabilities should reflect this. Likewise, context that concerns non-uniformly likely events (e.g., rolling a six with a die) should also be appropriately captured with proportionate output probabilities. We find that even in simple settings the best LMs (1) are poorly calibrated, and (2) have systematic biases (e.g., preferred colors and sensitivities to word orderings). For example, gpt-4o-mini often picks the first of two options presented in the prompt regardless of the options' implied likelihood, whereas Llama-3.1-8B picks the second. Our other consistent finding is mode-collapse: Instruction-tuned models often over-allocate probability mass on a single option. These systematic biases introduce non-intuitive model behavior, making models harder for users to understand." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "language modeling", "calibration", "model understanding" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/939f6159ab4ea076f1f53422adcd4a7a4146cd12.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Are Language Model Logits Calibrated?" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
x45vUUY4nT
Sharper Bounds of Non-Convex Stochastic Gradient Descent with Momentum
main
Active
learning theory;nonconvex optimization;stochastic gradient descent
learning theory
3;5;5;6;6
3;3;2;3;3
1;2;3;3;3
2;2;2;3;3
1;3;2;3;3
5
2.8
2.4
2.4
2.4
0
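For reference in the reviews of this submission, a minimal sketch of the heavy-ball SGDM iterate the bounds concern, with a decaying step size. The schedule, constants, and this particular momentum parameterization are illustrative assumptions (variants differ in whether the fresh gradient is scaled by 1 - beta):

```python
import numpy as np

def sgdm(grad_fn, w0, T, eta0=0.1, beta=0.9, rng=np.random.default_rng(0)):
    # Heavy-ball SGDM: m_t = beta * m_{t-1} + g_t;  w_{t+1} = w_t - eta_t * m_t.
    w = np.asarray(w0, dtype=float).copy()
    m = np.zeros_like(w)
    for t in range(1, T + 1):
        g = grad_fn(w, rng)                # stochastic gradient oracle
        m = beta * m + g
        w = w - (eta0 / np.sqrt(t)) * m    # decaying step size eta_t = eta0 / sqrt(t)
    return w
```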
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Can the authors present a clear comparison against vanilla SGD (without momentum) and how their results compare against existing results based on Algorithmic stability (for instance, Hardt et al 2016 that the paper cites)?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well written and the results appear to be fairly general and novel to my knowledge, though I am not an expert in recent developments in this area." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents high probability convergence rates and generalization bounds for SGD with momentum under a heavy tailed noise assumption for both general non-convex functions and under PL conditions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The empirical section is left somewhat underbaked - it will be worthwhile presenting additional results on more realistic tasks/datasets/neural networks (including transformer style architectures)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. What do you think are the key challenges in obtaining these theoretical guarantees for SGDM? Since there are some existing results as indicated in Table 1 that are worse than the results in the current paper, I'm curious about what are the key technical improvements over their analysis.\n\n2. By considering heavy-tailed gradient noise (large $\\theta$), what additional algorithmic insights can we obtain for the analysis? e.g. how do you compare SGDM with other first-order stochastic optimization methods?\n\n3. I'm not familar with the generalization literature; can you explain why you are considering $T=n/d$ in the general non-convex case and $T=n^2$ in the PL case?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The authors provide novel convergence results for SCDM, which is a popular algorithm in practice but is much harder to analyze than SGD.\n\n2. 
All the assumptions that this paper makes are followed by detailed discussions and comparisons to existing literature." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper derives high-probability convergence rates and generalization bounds for stochastic gradient descent with momentum (SGDM), a popular algorithm in practice. The bounds are first established for general non-convex functions, assuming sub-Weibull gradient noise. Then the authors move on to consider functions that satisfy the PL condition and derive improved bounds, with a generalization bound independent of the dimension." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Although I'm not so familiar with related literature, it seems that there are many existing high-probability and generalization bounds established for various optimization methods. It is unclear what the key differences are (see also Questions)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Figures 1 and 2 exclusively present variations of SGDM. Including additional baselines would enhance the comparative analysis, providing insight into the performance of the proposed algorithm relative to established methods. These baselines could incorporate algorithms from previous literature on non-convex optimization." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "For the general non-convex case, the authors prove that the convergence bounds of SGDM are sharper than those of related works and present the first generalization bounds of SGDM. With the additional Polyak-Łojasiewicz condition, the convergence bounds of SGDM achieve a faster $O(1/T)$ rate." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies Stochastic Gradient Descent with momentum (SGDM) and introduces theoretical convergence bounds and generalization bounds for SGDM. These bounds are tighter and faster than the theoretical results of related works in different settings." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Missing baselines for comparison in numerical experiments. (details in questions)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- $\\tilde{O} (1/T^{1/2})$ is the same rate as Li & Orabona (2020)? What do you mean by “the convergence bounds are tighter than those of the related work”? (I’m aware that Li & Orabnoa assumes sub-gaussian noise).\n- Assumption 2.8 needs to define quantities like $j_t$. I’m assuming this assumption has to hold for all $t$.\n- Typo: line 292, “over SGD [1]”. What is ref [1]? Seems hard coded." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "I believe SGDM is still used (although not really with decaying step size that this paper focuses on), so the provided theory for heavy-tailed noise can be relevant. The authors seem honest with their results, for instance explicitly stating the the provided results do not improve over standard SGD." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper provides high-probability convergence and generalization bounds for stochastic gradient descent with (heavyball) momentum (SGDM). Sharper bounds under PL assumption and Bernstein condition are also provided." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The amount of improvement compared to related work seem very marginal. For instance, compared to Li & Orabona, a related work that this manuscript often refers to, the improvement is only from $\\log(T/\\delta)$ to $\\log(1/\\delta)$. I get that Theorem 3.1 also holds for heavy tailed noise ($\\theta > 1$), but in that case the authors admit the bound does not improve over standard SGD.\n- I am not an expert in generalization bound for stochastic gradient methods. Yet, a quick search shows [1], which is present in the references of this paper, but I could not find where it was cited. At least some comparison to [1] should be provided.\n- SGDM with decaying step size is quite outdated at this point, with cosine or more sophisticated sche dules, for instance [2]. As such, I’m not sure how useful these results are.\n- Experiments are shown for convex examples like logistic regression and Huber loss. What is the point of these experiments, when the entire paper is about non-convex bounds?\n- Writing should be vastly improved. Many paragraphs and sentences are dense and long, hurting readability quite a bit: e.g., line 210 to 215 is one sentence, Remark 3.2, entire experimental setup from 466 to 492 is one paragraph… etc).\n\n[1] Ramezani-Kebrya et al. (2024) “On the Generalization of Stochastic Gradient Descent with Momentum”\n\n[2] Defazio et al. (2024) \"The Road Less Scheduled”" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- See weakness for my main question.\n\n- Line 1043: could the authors explain why $\\xi_t$ is sub-Weilbull (or alternatively why we can apply Lemma B.4 on $\\xi_t$)?\n\n- eq. 1: miss brackets around $\\exp$" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- This paper provides a systematic analysis of high probability bounds of SGDM under various scenarios.\n- Regarding the technical aspect, this paper comes up with a uniformed analysis for different types of gradient noises, including sub-Gaussian, sub-Exponential, and heavy-tail noises, using the family of sub-Weibull distributions, which captures the degree of \"heavy-tailness\" by the parameter $\\theta$. This is also reflected in all results which naturally implies that heavier tails leads to worse bounds." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies high probability bound, including both convergence bound (corresponding to ERM loss) and generalization bound (corresponding to population loss) of SGDM in the non-convex and smooth regime. Besides the general non-convex case, this paper also analyzes the scenario under PL condition and shows accelerated bounds with this additional assumption. The results work under a relaxed assumption of stochastic gradientsa, where the noises of stochastic gradients are generalized to the family of sub-Weibull distributions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I don't agree with authors' claim that this paper is the first one studying the generalization bound of SGDM and that the two referred papers, Li & Orabona 2020 and Cutkosky & Mehta 2021, provide convergence bound (defined w.r.t. the ERM loss $\\nabla F_S$). I believe both referred papers actually provide generalization bound, e.g., see Li & Orabona Theorem 1 and Cutkosky & Mehta Theorem 2. Both theorems provide upper bounds on the generalization loss $\\nabla F$ instead of the ERM loss $\\nabla F_S$. Also I'd like to point out the difference in terms of data sampling: this paper considers sampling with replacement from a fixed dataset so that there could be repeated data, while the referred papers sample an i.i.d. data from the unknown population distribution in every iteration. This subtle difference is not mentioned in the paper. If my understanding is correct, the results of this paper is less novel than it claims.\n\nAs a related question, does Theorem 3.1 (and similarly Theorem 3.5) also hold for the generalization loss $\\nabla F$? It seems to me that if we slightly modify Assumption 2.4 and 2.8 by replacing $\\nabla F_S$ with $\\nabla F$, then the same analysis in the proof in C.1 still holds: smoothness (eq. 5) is not affected; the martingale difference concentration on page 20 and the sub-Weibull concentration on page 21 still hold with $\\nabla F_S$ replaced by $\\nabla F$. In other words, is it true that the same analysis (under slightly different assumptions) provides a generalization bound?\n\nAs a disclaimer, I'm not an expert in the field of learning theory, and I could be wrong with my understandings. 
I will be glad to reevaluate the results if the authors point out I'm wrong." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024sharper,\ntitle={Sharper Bounds of Non-Convex Stochastic Gradient Descent with Momentum},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x45vUUY4nT},\nnote={under review}\n}" }, "abstract": { "value": "Stochastic gradient descent with momentum (SGDM) has been widely used in machine learning. However, in non-convex domains, high probability learning bounds for SGDM are scarce. In this paper, we provide high probability convergence bounds and generalization bounds for SGDM. Firstly, we establish these bounds for the gradient norm in the general non-convex case. The derived convergence bounds are tighter than the theoretical results of related work, and to our best knowledge, the derived generalization bounds are the first ones for SGDM. Then, if the Polyak-{\\L}ojasiewicz condition is satisfied, we establish these bounds for the error of the function value, instead of the gradient norm. Moreover, the derived learning bounds have faster rates than the general non-convex case. Finally, we further provide sharper generalization bounds by considering a mild Bernstein condition on the gradient. In the case of low noise, their learning rates can reach $\\widetilde{\\mathcal{O}}(1/n^2)$, where $n$ is the sample size. Overall, we relatively systematically investigate the high probability learning bounds for non-convex SGDM." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "learning theory", "nonconvex optimization", "stochastic gradient descent" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/fb6c886a350cf84c7e046e98a2508262721a5393.pdf" }, "presentation": null, "primary_area": { "value": "learning theory" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Sharper Bounds of Non-Convex Stochastic Gradient Descent with Momentum" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
x4W8P7ybTE
Aligning Multimodal Models for Clinical Reasoning using Rule-based Rewards
main
Active
Medical Vision-Language Models
applications to computer vision, audio, language, and other modalities
3;3;5;8
5;4;4;2
3;3;3;3
2;2;3;3
1;3;3;3
4.75
3.75
3
2.5
2.5
-0.924911
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* Is it possible to have small discriminative image-text models as the baselines (e.g., CLIP [1] and ALBEF [2])?\n\n[1] Radford, Alec, et al. \"Learning transferable visual models from natural language supervision.\" International conference on machine learning. PMLR, 2021.\n\n[2] Li, Junnan, et al. \"Align before fuse: Vision and language representation learning with momentum distillation.\" Advances in neural information processing systems 34 (2021): 9694-9705." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* **Methodology**: In general, it's a sound approach. By designing the heuristics, the approach enables a rule-based system to **produce** training datasets, **evaluate** the responses, and **post-train** the models using RLHF. Similar insights were also approved in [1].\n\n* **Writing**: The paper is well-written and easy to follow. The experimental designs can support the claims in this paper.\n\n[1] Mu, Tong, et al. \"Rule-Based Rewards for Language Model Safety.\" Advances in Neural Information Processing Systems (2024)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "* **Topic**: Medical VLMs, Rule-based Rewards, RLHF\n* **Summary**: This paper proposed to improve medical VLMs through a rule-based framework. It includes (i) using a rule to generate instruction tuning data, (ii) using the rule to evaluate the response, and (iii) using the rule-based rewards to perform RLHF to reduce hallucination. The experimental results show that the proposed approach achieves better performance than OpenFlamingo, LLaVA, and LLaVA-Med, etc." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* One of my biggest concerns about this paper is that it didn't show the potential of such an approach in medical **generative** VLMs. To be specific, the modeling can be replaced with a **discriminative** classification model since the responses from the MVLM are \"pseudo\" free-text responses, and they can be replaced with text templates + classification labels. So is a **large** generative VLM (7B parameters) really needed to solve the proposed task?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "What does line 136: \"followed by a rigorous evaluation to ensure no hallucinations are introduced within the template set\" mean specifically? What was the criteria used in this evaluation, or what were some examples hallucinations / examples of how hallucinations were checked in the templates." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Strong and sound method with clear performance improvements over many experiments." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a new alignment algorithm for VLMs in clinical settings, addressing hallucination where models generate inaccurate or inconsistent responses. The authors propose a rule-based approach that grounds VLMs in clinical reasoning by creating large-scale visual instruction data and developing a reward function to ensure responses align with medical knowledge across multi-turn conversations. This avoids high costs of RLHF and they used it to create Dr-LLaVA." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Small things:\n- Figure 1 needs to be bigger. The conversation text cannot be read unless I zoom in alot.\n- Typo \"rule\": line 144 our conversational diagnostic system leverages ruke-based representations (Fig. 3) - should be \"rule-based\"" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In addition to the weaknesses detailed above, my additional questions are listed below: \n\n1. In Line 136, the authors state that they conduct “a rigorous evaluation to ensure that no hallucinations are introduced within the template set.” Can the authors provide additional details with respect to this evaluation and the numbers/types of hallucinations identified?\n\n2. To the best of my understanding, the single-turn QA performance values reported in Table 1 for question-level accuracy ($A_Q$) and conversation-level accuracy ($A_C$) should be equal, since there is only one question per conversation. Why are the values nearly 20 points apart for Dr-LLaVA?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper introduces an approach for improving the quality of textual outputs generated by vision-language models in the medical setting. The proposed approach intends to mimic clinical reasoning rules typically employed by physicians.\n2. 
The proposed approach is capable of understanding/generating multi-turn conversations in the medical setting, unlike many existing medical VLMs. The proposed approach can also handle diverse styles of interactions with clinicians. \n3. The authors demonstrate that their approach leads to performance improvements over several existing methods in this domain and conforms closely with clinical reasoning pathways.\n4. The paper is clearly written and is well organized." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Vision-language models often generate textual outputs that are not grounded in the input image-text information. The authors propose a new alignment algorithm that uses rule-based representations of clinical reasoning (obtained from a physician) to ground VLMs in medical knowledge. These representations are used to (i) generate visual instruction tuning data and (ii) derive a rule-based reward function that evaluates the clinical validity of VLM responses. The resulting algorithm eliminates the need for human involvement in data construction or reward model construction. The algorithm is used to develop Dr-LLaVA, a conversational VLM fine-tuned for analyzing bone marrow pathology slides. Dr-LLaVA is shown to outperform several VLM baselines and can handle diverse styles of interactions with clinicians." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Limited Scalability:** The application area explored in this paper is narrow in scope (bone marrow pathology classification), and it seems that in order to expand to other medical applications, a lot of manual effort is necessary. In particular, (1) a physician will have to manually generate a set of clinical reasoning rules in order to extend this approach to other medical application areas and (2) the rule-based reward model will need a new set of manually-constructed keywords (as in Table B.5). This weakness limits the widespread usability of this approach. \n\n2. **Additional Methodology Details:** Some important details with respect to the construction of the dataset and the rule-based rewards are missing, as detailed below:\n\n a. **Multi-Turn Conversations:** Each conversation in the synthesized conversational dataset consists of five question-answer pairs (Line 135). Why do all conversations include exactly five QA pairs, when it’s possible that the conversation could end in fewer turns if a leaf node of the decision tree is reached early (e.g., a low-quality image)? \n\n b. **Dataset details:** No details with respect to the composition of the synthesized dataset are provided. How large is the dataset? What is the class distribution with respect to the diagnoses? Are there sufficient samples for each of the possible decision pathways in the clinical reasoning decision tree?\n\n c. **Keyword Matching:** The authors use a manually pre-defined list of keywords in order to compute rewards and accuracy metrics. How accurate is the process of keyword matching, and how often do false positives / negatives arise?\n\n3. **Need for finer-grained evaluations:** The performance metrics in Table 1 are aggregate metrics. How well does the proposed approach perform in comparison to baselines across each diagnosis and each possible decision pathway?" 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "The paper is missing an ethics statement, which is essential in such a research involving patient information which is sensitive dataset." }, "flag_for_ethics_review": { "value": [ "Yes, Privacy, security and safety" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. How many whole slide images did you use to generate 16K datasets?\n2. Do you plan to release the model and the dataset?\n3. What is the source of the clinical reasoning pathways?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Rather than RLHF, the paper uses a hierarchy set of rules based on clinical reasoning pathways to train a medical VLM. This approach is a way around the lack of human annotations. \n2. Not only conventional conversational settings, but also diagnosis related and random order question sequence settings were tested and Dr-LLaVA performs better than the baseline models on all settings." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper used rule-based rewards generated from clinical diagnostic pathways instead of human feedback based rewards for reinforcement learning to train a medical VLM, Dr-LLaVA. The rule-based rewards include consistency and correctness. 16,340 bone marrow pathology patches and hematopathologist's annotations were used to generate conversations for training. Dr-LLaVA performed better across all settings when compared to baseline models which are mostly further supervised fine-tuned with the same conversations but without the reward function." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. GPT-4 is used to generate the conversations, and a rigorous evaluation is done to ensure no hallucinations are introduced. However, the authors provide no details on the specific evaluation methods used to ensure this, which limits the reproducibility of the work. Additionally, this lack of transparency undermines the objective of reducing hallucinations with reinforcement learning, as the GPT-4-generated dataset could still contain incorrect statements.\n2. The essential details on clinical decision making pathways, dataset generation, and hallucination evaluations are either missing or located in the appendix rather than the main text, making it challenging to fully understand the methodology without the appendix. Since these are important in the work, the absence from the main text disrupts the flow and limits the study’s context and reproducibility. Including these details in the main body, or at least providing a summary with clear references to the appendix for further depth, would make the paper more accessible and cohesive.\n3. The presentation of the work needs improvement, especially in organization and clarity to be publishable quality. 
Currently, the layout requires readers to frequently go back and forth, as text descriptions are located far from the tables and figures that they describe. This disrupts the flow and makes it challenging to follow the paper. For instance, Figure 3 is introduced in Section 2 before Figure 2, which creates confusion in the narrative structure. Additionally, abbreviations are often used without prior definition; for example, Table 2 and Table 3 feature abbreviated column headings like \"CQ-R\" and \"Hcc,\" but their meanings are not clarified until much later in the text or, in some cases, are only defined in the Appendix. Including these definitions upfront or alongside the tables would enhance readability. Also, typos, such as \"ruke\" instead of \"rule\" on line 144, need to be corrected.\n4. The paper lacks experiments specifically designed to address the issue of misalignment between image and text and to demonstrate how Dr-LLaVA mitigates this problem, despite this being a primary motivation for using RL over SFT. Without empirical evidence or an ablation study showing that RL successfully reduces misalignment, the paper's contribution is weakened.\n5. The paper lacks a comprehensive qualitative evaluation, presenting only a single qualitative example set in Figure 4. This limited approach does not provide enough insight into the model's practical application and performance across varied cases. Conducting a broader qualitative evaluation, ideally involving human assessments by clinicians, would allow for a more in-depth assessment of the model's reliability and relevance in real-world clinical settings. \n6. The paper lacks a section on limitations, which is essential for providing a balanced view of the work's contributions and areas for future improvement. \n7. The paper is missing an ethics statement, which is essential in such research involving patient information. An ethics statement provides transparency about how ethical considerations were addressed, including data handling, privacy safeguards, and compliance with relevant regulations." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024aligning,\ntitle={Aligning Multimodal Models for Clinical Reasoning using Rule-based Rewards},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x4W8P7ybTE},\nnote={under review}\n}" }, "abstract": { "value": "Vision-Language Models (VLM) can support clinicians by analyzing medical images and engaging in natural language interactions to assist in diagnostic and treatment tasks. However, VLMs often exhibit \"hallucinatory\" behavior, generating textual outputs not grounded in contextual multimodal information. This challenge is particularly pronounced in the medical domain, where we require VLM outputs not only to be accurate in single interactions but also to be consistent with clinical reasoning and diagnostic pathways throughout multi-turn conversations. For this purpose, we propose a new alignment algorithm that uses rule-based representations of clinical reasoning to ground VLMs in medical knowledge. 
These representations are utilized to (i) generate visual instruction tuning data at scale, simulating clinician-VLM conversations with demonstrations of clinical reasoning, and (ii) to derive a rule-based reward function that automatically evaluates the clinical validity of VLM responses throughout clinician-VLM interactions. Our algorithm eliminates the need for human involvement in training data generation or reward model construction, reducing costs compared to standard reinforcement learning with human feedback (RLHF). We apply our alignment algorithm to develop Dr-LLaVA, a conversational VLM finetuned for analyzing bone marrow pathology slides, demonstrating strong performance in single and multi-turn medical conversations." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Medical Vision-Language Models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d431d8d0f0285b86d674c0c695dc1c890aa7d8b1.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/3dfaa354b304aa771a23354e3accfcc47ae6dc8e.zip" }, "title": { "value": "Aligning Multimodal Models for Clinical Reasoning using Rule-based Rewards" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
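The reviews above repeatedly probe how the rule-based reward and its keyword matching behave. A hypothetical sketch of such a reward is given below; the keyword lists, labels, and scoring constants are placeholders, not the submission's actual Table B.5 entries or reward definition.

```python
# Hypothetical keyword table; the real system uses clinician-curated
# keywords per diagnostic step (see the reviews' questions on Table B.5).
DIAGNOSIS_KEYWORDS = {
    "aml": ["acute myeloid leukemia", "aml", "myeloblasts"],
    "normal": ["normal marrow", "unremarkable", "adequate cellularity"],
}

def keyword_match(response: str, label: str) -> bool:
    """True if any keyword for the reference label appears in the response."""
    text = response.lower()
    return any(kw in text for kw in DIAGNOSIS_KEYWORDS.get(label, []))

def rule_based_reward(turns, labels, consistency_bonus=0.5):
    """Score a multi-turn conversation: +1 per turn whose answer matches the
    reference label, plus a bonus when every turn along the diagnostic
    pathway is correct (a crude consistency term)."""
    correct = [keyword_match(r, y) for r, y in zip(turns, labels)]
    reward = float(sum(correct))
    if correct and all(correct):
        reward += consistency_bonus
    return reward
```

A scheme like this makes the reviewers' concerns concrete: substring matching can produce false positives/negatives, and extending to a new specialty requires a new keyword table and reasoning tree.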
x4ZmQaumRg
Active Learning for Neural PDE Solvers
main
Active
Active Learning;Neural PDE Solvers;Scientific Machine Learning;Benchmark;Framework;Neural Operators
datasets and benchmarks
5;5;6;6
3;3;4;3
3;3;3;3
1;2;3;3
3;3;4;3
5.5
3.25
3
2.25
3.25
0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "NA" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The parameter $c$ mentioned in line 96-97 (referred to as the field variables or channels) seems a bit ambiguous here as the following PDE doesn't contain anything about $c$. Would it be possible for the authors to provide more detailed explanation for the field variable/channel $c$ here? (This also highly relates to the parameter $N_c$ appearing in equations (3) and (5).)\n\nReferences:\n\n[1] Bruna, Joan, Benjamin Peherstorfer, and Eric Vanden-Eijnden. \"Neural Galerkin schemes with active learning for high-dimensional evolution equations.\" Journal of Computational Physics 496 (2024): 112588.\n\n[2] Gajjar, Aarshvi, Chinmay Hegde, and Christopher P. Musco. \"Provable active learning of neural networks for parametric PDEs.\" In The Symbiosis of Deep Learning and Differential Equations II. 2022.\n\n[3] Gao, Wenhan, and Chunmei Wang. \"Active learning based sampling for high-dimensional nonlinear partial differential equations.\" Journal of Computational Physics 475 (2023): 111848." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Extensive numerical experiments on multiple PDEs are provided to validate the effectiveness of the proposed methodology. Also, details about the numerical experiments, such as the neural network models and training procedures, are included for the sake of completeness." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper provided a bechmark called AL4PDE, which unifies active learning (AL) with neural PDE solvers. Specifically, it studies how several state-of-the-art neural surrogate model may be applied to solve parametric PDEs under a solver-in-the-loop (AL) setting. A complete set of numerical experiments on various tasks is included to justify the effectiveness of AL based methods compared to methods based on random sampling." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Though the authors have conducted a literature review on how AL has been used for solving other problems from scientific ML, such as PINN and direct prediction, it seems to the reviewer that the authors have missed a few important references like [1,2]. It might be meaningful for the authors to include these work and briefly discuss them in the introduction. \n\n2. Given that this work aims for a complete benchmark on various tasks, the authors might consider including some more experiments on high-dimensional PDEs, just like the setting of [3]." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the strengths and weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well-presented and easy to follow.\n\nThe proposed framework is novel in extending neural PDE methods with active learning methods.\n\nThe benchmark includes various batch selection strategies and neural PDE solvers, covering recent and classical works." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a benchmark framework for neural PDE solvers under active learning (AL) settings (AL4PDE). It provides a modular benchmark with various parametric PDEs and AL methods. The experimental results show that AL significantly reduces average and worst case errors compared to random sampling and yields reusable datasets across experiments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "One key benefit of AL is data efficiency, which is also stressed in the paper. It is important to show how much data reduction can be achieved with reasonable model performance.\nCurrent experiment section only shows performance comparison of different active learning methods and lacks the \"offline\" performance, which is training the model with full dataset and evaluate its performance. \n\nAt line $88$, the author claims \"We demonstrate that using AL can result in more accurate surrogate models trained in less time.\" As mentioned above, this claim is not supported with empirical evidence as the experimental section only compares active learning performance, which cannot demonstrate improvement in accuracies regarding offline performance.\n\nThe novelty of this framework seems limited, as it is a combination of existing AL and neural PDE methods." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "NA" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Could you please add baselines form DoE?\n\nCould you link your benchmark code to existing open source code for uncertainty quantification (e.g. UQ 360)?\n\nCould you explain the tradeoffs of reducing the dimensionality of features vs. 
implementing translational invariance in terms of data efficiency of the feature-based AL?\n\nCould you add examples that do not use periodic boundary conditions?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Contributing a benchmark in active learning for PDE solvers fills a needed gap in computational infrastructure for PDE solvers that is key to the central challenge of data efficiency.\nThe article is pedagogical and clearly presents the capabilities of the benchmark." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This article introduces an active learning benchmark for neural PDE solvers. It compares exploration-exploitation tradeoffs based on uncertainty (epistemic uncertainty of an ensemble of models with top-K and SBAL) or features (using dimensionality reduction via Gaussian sketching with Core-Set and LCMD). The authors then benchmark these methods on 1D and 2D parametric PDEs, adding the baseline of sampling uniformly at random to represent the lack of active learning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The authors should help compare methods of Bayesian active learning and those of the field of design of experiments (DoE), which is missing in the literature review. For example, instead of using a baseline of uniform sampling, Latin Hypercube sampling should be provided, as well as more sophisticated DoE methods. This benchmark effort is an opportunity to bridge these areas of research and communities that try to solve the same problem with a slightly different point of view and a different approach.\n\nThe benchmark should broaden the UQ methods by connecting to existing efforts (for example, the open-source UQ 360). Any UQ method that provides a confidence interval should suffice for active learning, as the spread of the confidence interval can be a proxy of the uncertainty.\n\nIn the implementation details, the tradeoffs of the choice of taking the spatial average over the features to make feature-based AL translation invariant are not discussed. It seems that the averaging creates a significant dimensionality reduction that may outweigh the benefits of a translational invariance in terms of data efficiency.\n\nAll the implementations use periodic boundary conditions, which significantly limits the scope of applications." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "No question" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The well-designed modular framework provides a solid foundation for further research on active learning in the context of PDEs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a modular active learning (AL) framework for training surrogate models of partial differential equation (PDE) solvers. It introduces a numerical solver for generating PDE samples, several surrogate models, batch selection strategies, and acquisition functions designed for active learning within this framework." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Although the framework is tailored for PDE problems, the implemented acquisition functions are orthogonal to PDE problems i.e. they are AL methods that are used in general domains. As a framework of AL for PDE, at least some PDE-specific AL methods such as adaptive sampling [1], also mentioned in Related Work, should be also implemented.\n1. The paper’s scope in terms of the surrogate models, acquisition functions, and types of PDEs studied is quite limited, impacting its practical applicability.\n\n[1] W. Gao and C. Wang, Active Learning Based Sampling For High-dimensional Nonlinear Partial Differential Equations, Journal of Computational Physics, Vol. 475, 2023." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "A extensible benchmark to evaluate pool-based active learning for neural PDE solvers." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024active,\ntitle={Active Learning for Neural {PDE} Solvers},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x4ZmQaumRg},\nnote={under review}\n}" }, "abstract": { "value": "Solving partial differential equations (PDEs) is a fundamental problem in engineering and science. While neural PDE solvers can be more efficient than established numerical solvers, they often require large amounts of training data that is costly to obtain. Active Learning (AL) could help surrogate models reach the same accuracy with smaller training sets by querying classical solvers with more informative initial conditions and PDE parameters. While AL is more common in other domains, it has yet to be studied extensively for neural PDE solvers. To bridge this gap, we introduce AL4PDE, a modular and extensible active learning benchmark. It provides multiple parametric PDEs and state-of-the-art surrogate models for the solver-in-the-loop setting, enabling the evaluation of existing and the development of new AL methods for PDE solving. We use the benchmark to evaluate batch active learning algorithms such as uncertainty- and feature-based methods. We show that AL reduces the average error by up to 71\\% compared to random sampling and significantly reduces worst-case errors. Moreover, AL generates similar datasets across repeated runs, with consistent distributions over the PDE parameters and initial conditions. 
The acquired datasets are reusable, providing benefits for surrogate models not involved in the data generation." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Active Learning", "Neural PDE Solvers", "Scientific Machine Learning", "Benchmark", "Framework", "Neural Operators" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/3ba9c3f08a15e3c39295bfb2bd7993fd6e20419b.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/ec12a9b33858131c5e3251c9629a610ba2f08e58.pdf" }, "title": { "value": "Active Learning for Neural PDE Solvers" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
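The reviews above discuss uncertainty-based batch selection in the solver-in-the-loop setting. A minimal sketch of a pool-based AL loop with an ensemble-variance acquisition follows; all callables (`solver`, `train_fn`, `sample_pool`) and constants are hypothetical stand-ins, not AL4PDE's actual API.

```python
import numpy as np

def select_batch(models, pool_params, batch_size):
    # Epistemic-uncertainty proxy: variance of the surrogate ensemble's
    # predictions, averaged over space/time per pool candidate.
    preds = np.stack([m(pool_params) for m in models])       # (n_models, n_pool, ...)
    scores = preds.var(axis=0).reshape(preds.shape[1], -1).mean(axis=1)
    return np.argsort(scores)[-batch_size:]                  # top-k most uncertain

def al_loop(init_params, solver, train_fn, models, sample_pool,
            rounds=5, batch_size=32):
    """Solver-in-the-loop AL: label selected parameters/initial conditions
    with the classical numerical solver, then retrain the surrogates."""
    data = [(p, solver(p)) for p in init_params]             # seed set
    for _ in range(rounds):
        models = [train_fn(m, data) for m in models]         # retrain ensemble
        pool = sample_pool()                                  # candidate ICs / PDE parameters
        for i in select_batch(models, pool, batch_size):
            data.append((pool[i], solver(pool[i])))           # query numerical solver
    return models, data
```

Swapping `select_batch` for a feature-based criterion (e.g., Core-Set or LCMD over sketched features) changes only the scoring step, which is the modularity the benchmark is built around.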
x4jPW4p55i
Learning High-dimensional Gaussian Mixture Models via a Fourier Approach
main
Active
Gaussian Mixture Models(GMM);Parameter Estimation;Model Order Selection;Super-resolution;Line Spectral Estimation
unsupervised, self-supervised, semi-supervised, and supervised representation learning
3;3;3;3;6;6
3;4;4;3;3;4
2;2;2;2;3;4
2;4;2;2;3;4
2;1;2;2;3;4
4
3.5
2.5
2.833333
2.333333
0
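Before the reviews that follow, a brief aside on the submission's central technique may help: the paper estimates mixture means from Fourier (characteristic-function) measurements of the samples. A minimal sketch is below, assuming a known isotropic covariance $\sigma^2 I$ (an illustrative simplification; for a general known $\Sigma$, one would divide out $\exp(-t^\top \Sigma t/2)$ instead).

```python
import numpy as np

def fourier_measurements(samples, freqs, sigma):
    """Empirical characteristic function, denoised by the Gaussian envelope.

    For X ~ sum_k w_k N(mu_k, sigma^2 I),
        E[exp(i <t, X>)] = exp(-sigma^2 |t|^2 / 2) * sum_k w_k exp(i <t, mu_k>),
    so dividing out the envelope leaves (a noisy estimate of) the
    trigonometric moments sum_k w_k exp(i <t, mu_k>) in the means.
    samples: (n, d) array; freqs: (m, d) array of frequency vectors t.
    """
    phases = samples @ freqs.T                     # (n, m) inner products <x_j, t_l>
    ecf = np.exp(1j * phases).mean(axis=0)         # empirical E[exp(i<t,X>)]
    envelope = np.exp(-0.5 * sigma**2 * (freqs**2).sum(axis=1))
    return ecf / envelope                          # approx sum_k w_k exp(i<t,mu_k>)
```

These denoised measurements are the line-spectral data on which MUSIC-type imaging operates, which is the object of the reviews' questions about the imaging function and its maxima.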
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- What are the theoretical guarantees for recovering the means through the ``largest k maxima of $\\mathcal{J}$\" in the MUSIC algorithm? \n- What is L in algorithm 1 and section 2.2?\n- From eq. 17, each $f_d$ can in general be $\\Theta(1)$. As per the definition of f in eq. 13, doesn't this imply an exponential dependence on the dimension in the sample complexity given by eq. 15?\n- In Theorem 2, what is the requirement on n?\n- Why can't the algorithm in Vempala & Wang (2002) be further used to directly estimate the means of the mixtures? How does this compare to the MUSIC algorithm?\n\n-- Typos:\n- Line 388: expectated -> expected" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper does an adequate job at describing the problem setup and the proposed algorithm.\n- The proposed approach could have utilities in applications with known low-frequency bias in the fourier domain.\n- Numerical results suggest improved computational efficiency over the EM baseline with similar performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes an algorithm for estimating the means, the order (number of components), and the mixing distribution (the distribution over components) of a Gaussian mixture model through the measurements in the fourier space of the empirical measure of independent samples. Assuming the knowledge of the covariance, the work provides lower-bounds on the sample-complexity for estimating the order and mixing distribution. The paper further suggests applying PCA to improve the algorithm's computational complexity by estimating the low-dimensional subspace spanned by the means. Lastly, the paper numerically evaluates the proposed approach against EM (Expectation Maximization) baselines and criteria for estimation of the model order." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The theoretical analysis in the work is limited. The paper only provides lower bounds on the sample-complexity for estimating the model order and mixing distribution without a discussion of the sample-complexity for recovering the means themselves. The paper doesn't describe the conditions under which the mixture means can be estimated through the local minima of the MUSIC imaging function apart from lower-bounds on the necessary discretization. The PCA-based result is also directly adapted from Vempala & Wang 2002 and is not combined with the analysis for the main algorithm. 
Ideally, the sample-complexity guarantees should be compared with information-theoretic/minimax-optimal lower-bounds at a given scaling of separation distance and the covariance.\n- The proposed algorithm is an extension of Liu and Hai Zhang 2024 to multi-dimensional random variables and therefore possesses limited novelty. \n- It's unclear what assumptions on the Fourier decomposition of the covariance are assumed for the proposed algorithm to be sample-complexity efficient. Even for the sample-complexity lower bounds for estimating the order, the dependence on the Fourier decomposition of the covariance is hidden in the quantity $f$. It should be clarified how $f$ scales under a scaling of $\\Delta$ and $\\Sigma$ that allows recovery." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Please see above" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "The paper is well written. Both the problem and the techniques used in the paper are novel and relevant." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors tackle the problem of parameter estimation from a GMM with iid samples when even the number of components is unknown. To do so, the authors exploit Fourier measurements of the samples - a novel technique in itself - and propose an algorithm with linear time complexity. The authors further show the use of PCA in dimension reduction for improved computational guarantees." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I have some questions regarding this work\n\n1. I am worried that instead of the number of components being a hyperparameter, we now have the cutoff frequencies as a hyperparameter to the algorithm. Why is the latter better than the former? What is $L$ in Algorithm 1?\n\n2. It is critical that the MUSIC algorithm is inserted into the main text. Otherwise the reader who is not acquainted with it is not able to understand it at all. \n\n3. The algorithm is very poorly written. The function $\\Psi_n(t)$ is not referred to appropriately. In Step 2, what is $\\mu$? What is the quadratic program written in Step 4? The entire algorithm is a black box\n\n4. There is no intuition provided for the MUSIC algorithm. \n\n5. The lower bound in Remark 324-330 talks about the sample complexity when the number of components is known - is that a contribution of this work? In any case, for $K=5$ and $\\Delta=1/2$, the dependence in the lower bound is $2^{18}$. On the other hand, the upper bound in Theorem 1 has a dependence of $2^{16}$. Why is the upper bound smaller than the lower bound? 
What am I missing?\n\n5) The fact that PCA reduced \"time\" complexity needs to highlighted appropriately - confusion with sample complexity in several places" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please look at the weakness section." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper addresses the important problem of Gaussian mixture model estimation, with particular emphasis on determining the unknown number of mixture components. The authors propose a Fourier series-based method designed to simultaneously extract both the number of components $K$ and the model parameters $\\mu_i's$ and $w_i's$. Through extensive simulations, the authors demonstrate that their proposed algorithm performs comparably to or better than the traditional EM algorithm." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes two algorithms for estimating parameters in Multivariate Gaussian Mixture Models with a known common covariance matrix. The first algorithm leverages the MUSIC (Multiple Signal Classification) algorithm, along with a quadratic program, to simultaneously determine the model order $K$, estimate mean parameters, and estimate the mixing weights $w_i$. For small problem dimensions $d$, the authors propose gridding a bounded region in $R^d$ using $N^d$ data points to approximate an appropriate Fourier function, which is then used to determine the model order and parameters. For high-dimensional problems, the authors employ a Principal Component Analysis-based approach to first perform dimensionality reduction, and then apply the gridding technique (the first algorithm) to estimate the parameters." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper, while well motivated, lacks concrete theoretical guarantees. For instance: \n\n1. It is not clear how accurately the number of components $K$ of the model can be estimated by the MUSIC algorithm (even in small dimensions in presence of noise). More concretely, is the estimate (say K-hat) for the number of components a consistent estimator for the true number of components $K$? What happens to this estimate when the dimension of the problem is large.\n\n\n\n2. Are the estimates of weights and mean vectors are consistent? The authors suggest to run their algorithm with a larger number of components when the true number of components are unknown (page 14 lines 752-755). It is not clear what happens in that scenario. To best of my under standing, one selling point of the paper (compared to K-means or the EM algorithm) was that it detects the number of components accurately. It is not clear from the presentation of the paper, how justified is this claim. \n\n\n\n3. 
Finally, the paper recommends to grid a bounded region in the space $R^d$ using $N^d$ data points, and hence it is not efficient even when $d$ is small. On the contrary, the competing algorithms like K-means and EM-algorithm do not face such issues. \n\n\n\n4. The paper seems to be an extension of the MUSIC algorithm for one dimension [1]. It would be nice to know what are some technical challenges that are novel in this setting. \n\n[1]. Xinyu Liu and Hai Zhang. A fourier approach to the parameter estimation problem for one-\ndimensional gaussian mixture models. arXiv preprint arXiv:2404.12613, 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- Please see the weakness question above for questions regarding definitions and nomenclature?\n\n- Line 210: I believe the last term should have \\sigma_max(\\Sigma), rather than \\sigma_min(\\Sigma). Note that t^{T} \\Sigma T > ||t|^2 \\sigma_{\\min}(\\Sigma) and hence 1/(t^{T} \\Sigma T) < 1 /(||t|^2 \\sigma_{\\min}(\\Sigma)) and hence -1/(t^{T} \\Sigma T) > -1 /(||t|^2 \\sigma_{\\min}(\\Sigma)). This changes the result in line 214, so the dependence on n also changes to sigma_max. Would it change any other results in the paper? For example, would it change Equation 15?\n\nLine 318: In theorem 1, it is stated that \\Delta >= R_{D, K} holds with probability at least 1 - delta. From Definition 2, \\Delta seems to be a property of the underlying distribution and does not have anything to do with samples. So I am not sure what this theorem means." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper proposes a new algorithm for GMMs and characterizes its complexity. The results are novel and would be of interest to researchers in the field." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper considers the problem of estimating the means and the number of components of the GMM from samples. It proposes a new approach based on estimating the characteristic function (based on Fourier transform) and characterize the number of samples and time required by the algorithm to estimate the means and the order of the GMM." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My biggest concern about the paper is that the current writing assumes the reader is familiar with many technical terms which are not commonly used in machine learning or TCS conferences, for example terms such as 'spectral estimation algorithm', 'stable recovery', 'cutoff frequency', which are stated and used without being formally defined. While researchers in statistics may be familiar with these terms, the paper is difficult to follow for ML theory researchers. 
The paper also has some typos or mistakes (see the questions section), which makes it hard to follow. Here are few suggestions which might improve the flow (in no specific order):\n\nSome notations are used before they are defined, e.g.,\n- Line 136: D is not defined at this point.\n- Line 143: w_{min} and \\Delta are not defined yet.\nLine 142, 144: what is \"stable recovery\" and \"Stable estimation\"? Does it mean with high probability or something more subtle? It might be good to formally define it.\nLine 233, 249: what is L?\nLine 250: what is \\tilde{K}?\nLine 377: what is M?\n\nSome parts can use a reference\n- Line 192: why is the variance of asymptotic normality ( 1- |\\phi(t)|^2)? Please add a citation.\n- Line 246: why does Nyquist-Shannon sampling theorem imply that to estimate the mean, one needs h < \\pi/R?\n\nSome terms may not be easily understood by ICLR researchers\n- Line 194: why is multiplying by e^{t^T \\Sigma t} considered as modulating? While this is not critical to the story, there are many terms used like this, which may not be familiar with readers.\n\nI believe due to space constraints, the MUSIC algorithm is provided in the Appendix, while this is fine, it might be good to have some explanation of the algorithm in the main paper.\n\nLine 236: Algorithm 1: what is the difference between w and \\pi? Is there a reason to use \\hat{w}_i to denote an estimate of \\pi_i?\n\nLine 285: what is \\epsilon(t)? in Equation 13? Since this is a fixed quantity, by display equation in line 197, isn't y(t) = modulated characteristic function when epsilon(t) = 0? Also what does the assumption ||epsilon(t) ||_\\infty \\leq sigma mean?\n\nRemarks in line 320 and 324: The abstract currently suggests the provided bounds are the min-max rates for order estimation. However, after reading these two remarks, I realized that these are rates for a specific algorithm. I believe this confusion is not intentional, but would be good to clarify." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Do we know if the proposed algorithm can achieve the lower bound?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper provides sample complexity lower bounds." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper provides a lower bound on sample complexity for learning mixture of Gaussians.\n\nIt proposes a PCA based approach to solve learning mixture of Gaussians, which is comparable to EM." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I couldn't find theoretical guarantees for the proposed algorithm." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In the 'Primary Area' section, the authors\nchose to classify their paper in \n'semi-supervised, and supervised representation learning'. For the N iid input samples, \nis there any vector of known classes (i.e. ground truth)\nused in their estimation algorithm?\n\nCan the authors provide some applications \nof their algorithm in these two areas?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper studies theoretically the popular \nproblem of unsupervised learning of estimating the GMM \ndistribution of N input samples.\nIt states its goal clearly: it is about \nthe usage of Fourier operations, which lower the \ncomplexity of the learning algorithm.\nIt uses concise language, it presents its model and \nvalidates the complexity efficiency versus some variants\nthe EM algorithm.\nIt defers the detailed \ndiscussion of MUSIC for the appendix. \nThe above points make it well written\nand clearly presented.\n\nFrom a first study, the model seems mathematically \nsound and the suggested Fourier operations\nform a learning algorithm which seems to be novel.\n\nThe assumptions of the model are restrictive: the\nauthors acknowledge that in Lines 530 - 538." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors study the problem of using iid\nsamples to estimate the D-dimensional Gaussian Mixture \nModels (GMM)\nparameters using an algorithm with linear complexity,\nwhich challenges the current baselines (EM algorithm)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper chooses to analyze the complexity of the \nmodel and avoids allocating a discussion \nin practical applications of GMM estimation. That \nwould make it stronger as a machine learning study." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024learning,\ntitle={Learning High-dimensional Gaussian Mixture Models via a Fourier Approach},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x4jPW4p55i},\nnote={under review}\n}" }, "abstract": { "value": "In this paper, we address the challenge of learning high-dimensional Gaussian mixture models (GMMs), with a specific focus on estimating both the model order and the mixing distribution from i.i.d. samples. We propose a novel algorithm that achieves linear complexity relative to the sample size $n$, significantly improving computational efficiency. 
Unlike traditional methods, such as the method of moments or maximum likelihood estimation, our algorithm leverages Fourier measurements from the samples, facilitating simultaneous estimation of both the model order and the mixing distribution. The difficulty of the learning problem can be quantified by the minimum separation distance $\\Delta$ and minimal mixing weight $w_{\\min}$. For stable estimation, a sample size of $\\Omega\\left(\\frac{1}{w_{\\min}^2 \\Delta^{4K-4}}\\right)$ is required for the model order, while $\\Omega\\left(\\frac{1}{w_{\\min}^2 \\Delta^{4K-2}}\\right)$ is necessary for the mixing distribution. This highlights the distinct sample complexities for the two tasks. For $D$-dimensional mixture models, we propose a PCA-based approach to reduce the dimension, reducing the algorithm’s complexity to $O(nD^2)$, with potential further reductions through random projections. Numerical experiments demonstrate the efficiency and accuracy compared with the EM algorithm. In particular, we observe a clear phase transition in determining the model order, as our method outperforms traditional information criteria. Additionally, our framework is flexible and can be extended to learning mixtures of other distributions, such as Cauchy or exponential distributions." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Gaussian Mixture Models(GMM)", "Parameter Estimation", "Model Order Selection", "Super-resolution", "Line Spectral Estimation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/e37fdaf563524d045779b67e0b3257ddd364e4c2.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/45691c41e4bf5806b152a1864f0ed0dc9af7fb33.zip" }, "title": { "value": "Learning High-dimensional Gaussian Mixture Models via a Fourier Approach" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
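The reviews above repeatedly probe how the Fourier machinery actually recovers the means. As a point of reference, here is a minimal 1-D illustration of the general recipe the abstract describes: demodulate the empirical characteristic function by the known Gaussian envelope, then read the means off the peaks of an imaging function. This sketch is not the authors' MUSIC-based algorithm; the function name, the cutoff frequency T, and the grid parameters are illustrative assumptions.

```python
import numpy as np

def estimate_means_fourier(samples, sigma2, T=2.0, n_freq=257, N=2048, R=20.0):
    # Empirical characteristic function Psi_n(t) on a grid up to cutoff T.
    t = np.linspace(-T, T, n_freq)
    psi = np.exp(1j * np.outer(t, samples)).mean(axis=1)
    # Demodulate the known Gaussian envelope: for a GMM with common variance
    # sigma2, psi(t) * exp(sigma2 * t^2 / 2) ~= sum_k w_k e^{i t mu_k}.
    y = psi * np.exp(0.5 * sigma2 * t ** 2)
    # Imaging function: |sum_j y(t_j) e^{-i t_j x}| peaks near the true means.
    x = np.linspace(-R, R, N)
    img = np.abs(np.exp(-1j * np.outer(x, t)) @ y)
    # Keep prominent local maxima as mean estimates (a crude stand-in for the
    # model-order selection that the paper performs via MUSIC).
    is_peak = (img[1:-1] > img[:-2]) & (img[1:-1] > img[2:])
    return x[1:-1][is_peak & (img[1:-1] > 0.5 * img.max())]

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-3.0, 1.0, 5000), rng.normal(2.0, 1.0, 5000)])
print(estimate_means_fourier(data, sigma2=1.0))  # approximately [-3, 2]
```

Note that the demodulation amplifies the O(1/sqrt(n)) noise in Psi_n(t) by exp(sigma2 * T^2 / 2), which is one way to see why the cutoff frequency acts as the critical hyperparameter that several reviewers question.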
x4lmFlfFKX
PolygoNet: Leveraging Simplified Polygonal Representation for Effective Shape Classification
main
Active
Shape Classification; Polygonal representation; Computational Efficiency; Self-Attention Mechanism;
learning on graphs and other geometries & topologies
1;3;3;3
3;4;4;3
2;1;1;2
1;1;2;2
2;2;3;3
2.5
3.5
1.5
1.5
2.5
0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "The paper tackles the problem of single object classification in an image. This is a VERY populated and saturated field, with techniques improving two tenths of a point with niche tricks being published. More specifically, the paper tackles efficient computing for edge devices. Again, this is a field that started years ago with dozens of publications and even products already running on edge devices around the world.\nAll this work is completely ignore by the paper. Instead, the paper suggests a naive and hand tuned reduction approach, and compares to a vanila resnet-50. Even in this dated comparison, there is no clear merit for the proposed approach. \nThere are still many questions left unanswered unfortunately: \n* How would self-learned features behave for the same computational cost? \n* What other alternatives for compression are there? \n* How does one tackle more complex scenes? \n* How does the method fair against shuffle net, mobile net and their many followups?" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is concise, and explains the idea in simple words\nThere might be some interesting insight in using a very concise representation" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a method for efficient image classification. The main insight is distilling images to two slim representations - either a contour or dominant points of this contour - and use it for classification, inducing minimal computational costs both during inference and training." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Unmotivated problem\nNon noval solution\nLimited applicability\nPoor results\nInadequate evaluation" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "+ Consider updating the paper with the recent state-of-the-art models (refer to https://paperswithcode.com/sota/image-classification-on-fashion-mnist) results on fashionmnist dataset.\n+ Please consider using larger-sized datasets for quantitative results." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "+ The proposed method achieved \n1) multiple outcomes (listed in the summary), making it unique.\n2) results comparable to SOTA methods on multiple datasets (FashionMNIST, Flavia, and Folio).\n+ The proposed method achieves impressive inference time (both on the server and Jetson Orin configurations, as shown in Figure 4) and total time on both devices (workstation and edge computing, as shown in Table 2).\n+ The paper is well-written and easy to understand/follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes an approach to leverage efficient polygonal representations of input images. Polygonal representation offers a concise and flexible depiction of images. The proposed method transforms input images into polygonal forms using either dominant points or coordinates of contours. The polygonal representation 1) substantially reduces the computational burden associated with processing large image datasets, 2) accelerates the training process, 3) conserves computational resources, 4) facilitates improved generalization of the trained models, and 5) is suitable for real-time applications and resource-constrained environments. The polygon forms are used to train deep neural networks. The proposed method achieves comparable performance to SOTA methods, mitigate overfitting, and produce lightweight models suitable for edge computing." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper is limited by qualitative results (restricted to one or two samples). It is unclear if there are complex cases and how the model performs.\n- Experiments are limited to small-sized datasets.\n- Resnet-50 is a slightly old method and is the only method used to compare with PolygoNet (Contours and DP, as shown in Table 1). It appears more like a technical report than a comprehensive comparison with recent state-of-the-art methods in the problem domain.\n- The margin of improvement in the results due to the proposed method is negligible (Table 1) on all three datasets. Though F1 Scores are comparable, the accuracy values are lower.\n- The proposed method is limited to extracting geometric/shape features from 2D images while ignoring color or other 2D feature details.\n- The paper does not provide an ablation study to understand the contributions of individual components within the proposed method." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The proposed architecture uses self-attention, which can potentially increase computation times over simpler architectures. Did the authors experiment with simpler architectures? 
If so, the results could be interesting to include.\nDid the authors experiment with PointNet-like architectures, since the input is composed of contour/dominant points?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- The paper is well written. In particular the first 5 pages are very well written, clear and easy to follow, including also an overview of the MATC algorithm\n- The idea is definitely interesting: using only contours (or, more generally, keypoints) to do classification can lower the computational requirements, which is important for edge devices, and it is also going in the direction of human-inspired techniques compared to processing every input pixel using CNNs or ViTs\n- The proposed algorithm is effective in significantly reducing the runtime and number of operations compared to the ResNet baseline" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a pipeline to classify input images with lower computational requirements compared to full-scale CNNs, potentially suitable for edge devices. The pipeline consists in running classical contour-extraction algorithms, optionally applying the MATC approach to extract dominant points from the contours, and then feeding the extracted points (contour or dominant) to an ad-hoc classification network architecture composed of self-attention with positional encoding and 1D convolutions, to leverage global and local context. Experiments are carried out on FashionMNIST, Flavia and Folio datasets, which include objects on regular backgrounds, comparing the results of the proposed method to a ResNet baseline. The proposed method uses significantly less FLOPs compared to the baseline, which translates to significantly lower inference times. The approach using dominant points uses marginally less operations compared to using the full contours, but it requires significantly longer overall time, albeit still less than the baseline, due to the MATC algorithm. Both proposed approaches achieve F1 scores comparable with the baselines, but reduced accuracy." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Even though the paper is generally well written, the first half of the paper includes repetitions when describing the proposed pipeline. The network architecture description at page 6, instead, could benefit from a more descriptive figure.\n- A good part of the paper is used to describe the MATC-based approach, only for it to be shown as slightly worse than the contour-based approach and with significantly higher runtime. Can the authors elaborate on the importance of this variant of the proposed pipeline? For example if they see use cases in which it can be preferred over the contour-based variant. Otherwise, the paper could also be reformatted to more effectively target the question \"how much information is needed for classification\", to which the answer could be that contours already provide a good amount of information, but dominant points carry less information and require more execution time. 
If targeting this question, additional experiments would be required (such as including internal contours/edges, colours, etc).\n- The paper claims \"enabling our pipeline to demonstrate consistent performance across diverse and challenging conditions\", however the experiments are carried out on datasets with very simple conditions, such as regular backgrounds, which can significantly simplify the task of contour extraction, key to the proposed pipeline. Either more challenging datasets should be employed, or the claim should be relaxed, and also clarified in the introduction that the pipeline is only suitable to images which can be easily binarized.\n- The experiments only include one baseline: ResNet. A milestone of vision architectures, ResNet is certainly important, but a bit outdated compared to more recent architectures. ViTs should be included (and they usually require more computation, as the authors also note, which could benefit the comparison to the proposed approach), as well as other architectures targeted to low-compute devices, such as YOLO. Without these baselines, it is difficult to assess the validity of the method, even in controlled conditions such as the ones offered by the chosen datasets." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "The proposed method is limited (see 2,3,4,5 above) and underestimates the task and system (see 1, 2, 3 above). Please clarify if I have a misunderstanding in the weakness section." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1 Fast inference speed." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The work proposes a contour and dominant point-based image classification model. By only considering the contour and dominant point, the work can speed up the training and inference speed of the neural network." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1 In lines 68-72, the work mentions, 'This methodology aligns with cognitive processes observed in human visual perception, where recognition is often based on key structural features rather than exhaustive pixel-by-pixel analysis Biederman (1987); Koffka (2013).' This motivation/insight oversimplifies the image recognition task and human perception. Human perception and cognition systems are much more complex than this. Biederman (1987)'s work addresses the basic level of the cognition process. Human cognition operates on multiple levels and involves lateralization, where both hemispheres of the brain contribute to different aspects of recognition—such as broad categorization and fine-grained identification. 
This multiple-level, lateralized system integrates structural, contextual, and surface information, which goes far beyond the simplified structural approach suggested in the work. I encourage the authors to take a look at this literature review [2].\n\nBy the way, there is an interesting game called 'Who's That Pokémon?' In this game, the player needs to guess the name of the Pokémon based on its contour, which is more challenging than using pixel-by-pixel information.\n\n2 In lines 23-25 and lines 65-67, the work mentions that the proposed method can filter out the background noise. This is an interesting direction, as spurious correlations [1] often arise from the background. However, the proposed method relies on a thresholding mechanism to filter out irrelevant content. I do not believe this approach is robust, especially in scenes with diverse backgrounds. The qualitative results are all based on datasets with simple backgrounds. Using the segmentation model is one way as mentioned in the conclusion, but the work does not consider integrating the model, limiting the work.\n\n3 Only considering contour and dominant points is risky and underestimates the complexity of image recognition/classification tasks. Many image classification tasks depend on more than just contour recognition. For example, how can the proposed method distinguish different kinds of balls, such as basketballs, soccer balls, ping-pong, volleyballs, and baseballs, without knowing the texture, color patterns, or surface details? How can the proposed method distinguish the brand of vehicles? The Flavia dataset used to demonstrate recognition is favorable to the approach, as the shape and contour of the leaves are different across categories. If the proposed method wants to prove the classification ability, considering the examples I mentioned is more convincing.\n\n4 The work emphasizes inference speed and tests it on edge devices, which is a highly relevant real-world consideration. However, the work does not provide a real-world demo; instead, it is tested on an existing dataset offline. Deploying this in a real-world scenario is a challenge that is not addressed. For instance, real-world scenes often have complex backgrounds, include multiple objects, and feature objects in various poses. None of these real-world considerations are taken into account in the work.\n\n5 The work may not work well for non-rigid objects and objects with various poses. Although the datasets contain non-rigid objects such as clothes, these objects are well-posed in the dataset.\n\n6 The baseline is just ResNet50. Only adopting one baseline is not convincing.\n\n[1] Kim, Younghyun, et al. \"Bias-to-text: Debiasing unknown visual biases through language interpretation.\" arXiv preprint arXiv:2301.11104 (2023).\n\n[2] Palmeri, Thomas J., and Isabel Gauthier. \"Visual object understanding.\" Nature Reviews Neuroscience 5.4 (2004): 291-303." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024polygonet,\ntitle={PolygoNet: Leveraging Simplified Polygonal Representation for Effective Shape Classification},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x4lmFlfFKX},\nnote={under review}\n}" }, "abstract": { "value": "Deep learning models have achieved significant success in various image-related tasks. 
However, they often encounter challenges related to computational complexity and overfitting. In this paper, we propose an approach that leverages efficient polygonal representations of input images by utilizing either dominant points or coordinates of contours. Our method transforms input images into polygonal forms using one of these techniques, which are then employed to train deep neural networks. This representation offers a concise and flexible depiction of images. By converting images into either dominant points or contour coordinates, we substantially reduce the computational burden associated with processing large image datasets. This reduction not only accelerates the training process but also conserves computational resources, rendering our approach suitable for real-time applications and resource-constrained environments. Additionally, these representations facilitate improved generalization of the trained models. Both dominant points and contour coordinates inherently capture essential features of the input images while filtering out noise and irrelevant details, providing an inherent regularization effect that mitigates overfitting. Our approach results in lightweight models that can be efficiently deployed on edge devices, making it highly applicable for scenarios with limited computational resources. Despite the reduced complexity, our method achieve performance comparable to state-of-the-art methods that use full images as input. We validate our approach through extensive experiments on benchmark datasets, demonstrating its effectiveness in reducing computation, preventing overfitting, and enabling deployment on edge computing platforms. Overall, this work presents a methodology in image processing that leverages polygonal representations through either dominant points or contour coordinates to streamline computations, mitigate overfitting, and produce lightweight models suitable for edge computing. These findings indicate that this approach holds significant potential for advancing the field of deep learning by enabling efficient, accurate, and scalable solutions in real-world applications. The code for the experiments of the paper are provided at \\url{https://anonymous.4open.science/r/PolygoNet-7374}" }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Shape Classification; Polygonal representation; Computational Efficiency; Self-Attention Mechanism;" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/818aba7578003bf60baa64a11f1b477254548ead.pdf" }, "presentation": null, "primary_area": { "value": "learning on graphs and other geometries & topologies" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "PolygoNet: Leveraging Simplified Polygonal Representation for Effective Shape Classification" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
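Several PolygoNet reviews turn on how images are reduced to contour or dominant-point sequences. Below is a minimal OpenCV sketch of that preprocessing; cv2.approxPolyDP (Douglas-Peucker) is only a generic stand-in for the paper's MATC dominant-point algorithm, the Otsu threshold assumes the simple, uniform backgrounds that reviewers flag as a limitation, and eps_ratio is an illustrative hyperparameter.

```python
import cv2
import numpy as np

def image_to_polygon(gray_image, use_dominant_points=True, eps_ratio=0.01):
    # Binarize; assumes a uint8 grayscale image with a single object on a
    # simple background, as in the datasets used in the paper.
    _, binary = cv2.threshold(gray_image, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # OpenCV 4.x API: returns (contours, hierarchy).
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)  # keep the largest object
    if use_dominant_points:
        # Douglas-Peucker simplification as a stand-in for MATC.
        eps = eps_ratio * cv2.arcLength(contour, True)
        contour = cv2.approxPolyDP(contour, eps, True)
    points = contour.reshape(-1, 2).astype(np.float32)
    return points / max(gray_image.shape)  # normalized (x, y) sequence
```

The resulting normalized point sequence would then be fed to a classifier such as the self-attention plus 1D-convolution architecture described in the reviews above.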
x5FfUvsLIE
Large Language Models based Graph Convolution for Text-Attributed Networks
main
Active
Text attributed graphs;Long-context model
other topics in machine learning (i.e., none of the above)
3;3;5;6
4;5;5;4
2;2;2;3
2;2;2;3
3;2;2;3
4.25
4.5
2.25
2.25
2.5
-0.19245
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See the weakness section." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- This paper is very well-written and easy to follow.\n- The proposed method of conducting LLM-based learning on graphs without a GNN component is novel and makes sense.\n- The proposed hash-based structural similarity calculation is novel to me." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates an interesting problem of leveraging LLM for learning on text-attributed graphs. The authors propose a method called SKETCH which adapts LLM for graphs by retrieving both structural and semantic information. To be specific, the semantic-based retrieval is built upon some off-the-shelf pretrained retrievers, and the similarity score is calculated by the embedding similarity search. On the other hand, the structure-based retrieval is designed to fetch related neighbors from the graph with a novel hash-based Jaccard similarity estimation. The semantic similarity score and structural similarity score are merged to select the final neighbors, which are put into the LLM together with the center node for problem-solving. The authors then conduct experiments on three real-world datasets to demonstrate the effectiveness of their proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Some model designs are not well-illustrated. For example, how the sampled neighbors and center text is finally fed into the LLM? What kind of instruction or prompt are you using? Do you train the model or just use do direct prompting?\n\n- Some experiments on larger datasets or other tasks other than node classification can be helpful. The experiments are mainly focused on 10k-size graphs. Can the method be scaled to a large graph with millions of nodes? Node classification might not be enough to demonstrate the strength of the proposed method. It would be interesting to try on some more advanced LLM-based graph reasoning benchmarks [1].\n\n- How many neighbors are finally selected? Is the model performance sensitive to the number of selected neighbors? Is there any scalability issue?\n\n- Typos: (1) “where |S| is the size of the text-attributed nodes” should it be |V|? (2) The equation in line 224 needs one further “=” to be complete.\n\n\n\n[1] Jin, B. Graph chain-of-thought: Augmenting large language models by reasoning on graphs. ACL 2024." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Is the method reproducible? Please provide your repository if possible. \n2. Is the GCN-generated embedding also leveraged in the method? Has the author carefully considered the differences between the proposed method and GCN, or should these differences be discussed further?\n3. The title is **LARGE LANGUAGE MODELS BASED GRAPH CONVOLUTION FOR TEXT-ATTRIBUTED NETWORKS**. Does this imply that feature aggregation based on a weighting mechanism quantified by semantic and structural proximity is a better way to present it?\n4. \"While effective, this method may miss the rich semantic nuances in textual data.\" This is probably a crucial starting point. Are there any references or experiments to support this claim?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Due to the missing semantic information during the message-passing process, the author has proposed a fused framework based on LM and graph heuristics, which is easily scalable.\n2. The author has conducted extensive experiments to demonstrate the performance improvement and computational efficiency." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The author has introduced a new method that combines a pre-trained language model (PLM) with a graph heuristic (Common Neighbor) for semi-supervised node classification. In this approach, the PLM generates semantic proximity by incorporating a weighted sum of token-level embeddings. This semantic information is then fused with local graph structure, such as the common neighbor heuristic, by weighting the local connections according to their semantic proximity." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. When introducing the background of GCN and RAG, some crucial papers are not cited. For instance, the structure of text-attributed graphs encompasses textual information from various nodes. Inspired by concepts from Retrieval-Augmented Generation (RAG) \\cite{one in NLP}{one in graph}{one in tag}, we propose integrating an additional corpus during the training process.\n \n2. The paper requires additional revision for better and more fluent logical flow. For example, GNNs primarily depend on node-level aggregation via graph convolutions to compute weighted sums of neighboring features. While effective, this method may overlook the rich semantic nuances in textual data. Incorporating such nuances enables a more granular understanding of the relationships between tokens, leading to improved flexibility and adaptability.\n\n3. The method is not well presented. 
In the section on aggregated learning of retrieved content, a weighted sum of semantic and structural proximity is introduced, but the difference from graph convolution is not carefully studied or justified.\n\n4. Empirical Demonstration: The results are reported without running on 5-10 random seeds or multiple data splits.\n\n5. It is not clear how to calculate semantic proximity and structural proximity for a node classification task." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The foundational work of Retrieval-Augmented Generation (RAG) [1] is not cited. Given that the primary contribution of this paper lies in graph retrieval-augmented generation, it is crucial for the authors to provide a comprehensive discussion of significant prior works [2-4] in related fields.\n\n2. Only texts in documents (nodes) are used, and connections (graph structure) between texts are not considered in the generation phase. The authors presents several drawbacks in TAG modeling, such as \"the text representations and graph structure are trained independently from their respective aspects, potentially resulting in sub-optimal integration between the two modalities\" and \"the separate processing stages do not take into account the simultaneous optimization of the two data types, resulting in information loss and reduced robustness\". Could the authors clarify how SKETCH addresses these challenges?\n\n3. The paper lacks implementation details and accessible code. How do authors fine-tune LLMs? What is train / val / test split? What is the searching space for each hyperparameter, e.g., $k$-hop? The reproducibility claims in the article are not convincing. The claim that \"the results and related analysis reported in the paper are only a summary of those available in the code\" is ambiguous.\n\n4. What is $G$ in $G = R_{sum} + R_{struct}$?\n\n5. It is fair to use frozen / fine-tuned LLMs as baseline. However, comparing the proposed model with tailored TAG models that do not utilize external knowledge bases may be unfair. Why not include RAG-based approaches for TAGs?\n\n6. What is external knowledge data used for each dataset?\n\n7. What are promps used for proposed model (SKTECH)? Are the prompts employed for the LLM baseline the same as those used for SKETCH?\n\n8. Why SKTECH with Nomic (127M parameters) perform better than SKTECH with Llama3 (8B parameters) on Wikipedia when Nomic has much fewer parameters?\n\n---\n[1] Lewis, Patrick, et al. \"Retrieval-augmented generation for knowledge-intensive nlp tasks.\" Advances in Neural Information Processing Systems 33 (2020): 9459-9474.\n\n[2] He, Xiaoxin, et al. \"G-retriever: Retrieval-augmented generation for textual graph understanding and question answering.\" arXiv preprint arXiv:2402.07630 (2024).\n\n[3] Hu, Yuntong, et al. \"GRAG: Graph Retrieval-Augmented Generation.\" arXiv preprint arXiv:2405.16506 (2024).\n\n[4] Edge, Darren, et al. 
\"From local to global: A graph rag approach to query-focused summarization.\" arXiv preprint arXiv:2404.16130 (2024)." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The retrieval-based approach offers a fresh perspective on handling TAGs.\n\n2. The model leverages hash-based similarity estimation to reduce computational costs in multi-hop similarity estimation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces SKETCH, a novel framework for handling text-attributed graphs (TAGs) based on retrieval-augmented generation, enhancing large language models (LLMs) for TAG-related tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Work on graph retrieval-augmented generation, which is closely related to the topic of this paper, is not discussed.\n\n2. Lack of implementation details such as hyparameter searching space, prompts used, etc." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Could you release the train/test/val splits of the three datasets? I haven't found it in Appendix A.1 and main text.\n- Could you provide more explanation on the claim that SKETCH requires fewer computational resources than other baselines?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The proposed method is well motivated and easy to follow\n- SKETCH givens a new perspective to integrate LLMs and the graph task\n- The writing and presentation of this paper is clear" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposed Semantic Knowledge and Structural Enrichment framework (SKETCH) to extract the semantic and structural related information from the graph to help graph understanding and reasoning. The conducted experiments show that SKETCH could enhance the model's performance on three graph datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The evaluation is only limited to 3 datasets with less than 10 classes. InstcurtGLM [1] was evaluated on Ogn-arxiv, while GraphFormers [2] was evaluated on Product, DBLP and Wiki.\n- The improvement from Llama3-8b+GraphSAGE scenario is marginal.\n- SKETCH requires extensive hyper-parameters tuning compared to existing graph based methods (such as Llama3-8b+GraphSAGE).\n\n[1] Ye, Ruosong, et al. \"Natural language is all a graph needs.\" arXiv preprint arXiv:2308.07134 4.5 (2023): 7.\n\n[2] Yang, Junhan, et al. 
\"Graphformers: Gnn-nested transformers for representation learning on textual graph.\" Advances in Neural Information Processing Systems 34 (2021): 28798-28810." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "we propose a novel framework to adapt the long-context model for graph learning by retrieving both structural and text-related content." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024large,\ntitle={Large Language Models based Graph Convolution for Text-Attributed Networks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x5FfUvsLIE},\nnote={under review}\n}" }, "abstract": { "value": "Text-attributed graph (TAG) tasks involve analyzing both structural information and textual attributes. Existing methods employ text embeddings as node features, and leverage structural information by employing Graph Neural Networks (GNNs) to aggregate features from neighbors. These approaches demand substantial computational resources and rely on two cascaded stages, limiting scalability in large-scale scenarios and making them vulnerable to the influence of irrelevant neighboring nodes. The advancement of language models (LMs) presents new avenues for tackling this task without GNNs, leveraging their ability to process text attributes of both the target node and its important neighbors. Instead of using graph convolution modules, LMs can assign weights to these tokens based on relevance, enabling token-level weighted summarization. However, it is nontrivial to directly employ LMs for TAG tasks because assessing the importance of neighbor nodes involves both semantic and structural considerations. Additionally, the large search space presents efficiency issues for computing importance scores in a scalable manner.\nTo this end, we propose a novel semantic knowledge and Structural Enrichment framework, namely SKETCH, to adapt LMs for TAG tasks by retrieving both structural and text-related content. Specifically, we propose a retrieval model that identifies neighboring nodes exhibiting similarity to the target node across two dimensions: structural similarity and text similarity. To enable efficient retrieval, we introduce a hash-based common neighbor estimation algorithm for structural similarity and a nearest-neighbor recalling algorithm for embedding similarity. These two similarity measures are then aggregated using a weighted rank aggregation mechanism. The text attributes of both the retrieved nodes and the target node provide effective descriptions of the target node and are used as input for the LM predictor. Extensive experiments demonstrate that SKETCH can outperform other baselines on three datasets with fewer resources." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Text attributed graphs", "Long-context model" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/2ce49d281ee362303d95e4628c899125defa5dd1.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Large Language Models based Graph Convolution for Text-Attributed Networks" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
x5YEibapUM
Unlocking the Theory Behind Scaling 1-Bit Neural Networks
main
Active
1-bit neural network;quantization;neural tangent kernel
learning theory
3;3;3;5
3;4;3;4
2;2;2;3
2;1;3;2
2;2;2;2
3.5
3.5
2.25
2
2
0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see the weaknesses. I'll ask more questions after looking at the author's rebuttal." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The topic this paper tries to push is timely, as there have been several works in Quantization Aware Training of low-bitwidth models recently, which show how these models start working better at scale.\n\n2. This paper takes an interesting NTK perspective to justify training dynamics and generalization of binary neural networks. They also derive a scaling law of 1-bit neural networks under certain assumptions." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper provides theoretical justification for scaling 1-bit neural networks, showing that their training dynamics converge to kernel-like behavior as model width increases. The authors demonstrate that the training loss can become arbitrarily small with sufficient network width and that the generalization difference between 1-bit and full-precision networks shrinks with scale. Using the Neural Tangent Kernel (NTK) framework, the authors guarantee training convergence to minimal loss. Preliminary empirical results confirm that 1-bit models perform comparably to full-precision networks on complex tasks, with significant efficiency gains. This work suggests that scaling quantization aware training of 1-bit neural networks are a promising direction for efficient deep learning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The main weakness of the paper is the lack of realistic experiments and the positioning of the paper. The authors use 1-bit LLMs to motivate the paper in the abstract and introduction, but their theory and experiments are for quantized MLPs at a very small scale. The authors should rewrite the abstract and introduction to clarify what their focus is on rather than overemphasizing low-bitwidth LLMs. For example, authors should explicitly state their focus is on quantized MLPs for learning functions, and clarify how their results may or may not extend to large-scale LLMs.\n\n2. In section 3.2, the authors mention $f$ is a two layer attention model; however, the equations suggest it’s a two-layer MLP. Symbols like $m$ are not defined clearly. This section needs better clarity.\n\n3. The choice of experiments (learning rigorous functions) seems arbitrary to me, given that the motivation of the paper is 1-bit LLMs. Can the authors explain this disparity?\n\n4. Authors should look at related papers that try to empirically derive the scaling law of low-bit (binary and ternary) quantization-aware training of LLMs like [1], which might make the arguments in their paper stronger. 
They also provide the model checkpoints and losses at various scales for 1-bit and 1.58-bit LLMs. I think if the authors want to position their paper as validating the scaling of low-bit LLMs using their theory, they can use the existing checkpoints from this work.\n\n[1] A. Kaushal et al. Spectra: Surprising Effectiveness of Pretraining Ternary Language Models at Scale" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See above." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper studies the important topic of training quantized networks from a theoretical point of view.\n- Showing convergence of 1-bit networks with quantization-aware training is, to my knowledge, novel.\n- Studying the relation between 1-bit networks and full-precision networks is interesting and important." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The work studies theoretically the power of 1-bit networks, using NTK theory. Specifically, the authors bound the convergence time for training a 1-bit quantized network using Quantization Aware Training, where the network is quantized in the forward pass and the gradient is computed with a straight-through estimator in the backward pass. They also relate the convergence guarantees to scaling laws for 1-bit networks, and complement these scaling laws with experiments. Additionally, the authors bound the difference between the quantized and non-quantized networks at initialization and during training." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Lemma 4.1: the results in the lemma depend on $\lambda$, which I assume is $\lambda_{\min}(H^*)$ (this should be properly stated). However, if this is the case $\lambda$ should depend explicitly on $\kappa^2$, and the dependence on $\kappa^2$ should be tracked throughout the results. It is better to define $\lambda$ as the minimal eigenvalue of the non-scaled Gram matrix, and have the explicit dependence on $\kappa^2$ in the results.\n- It is worth noting that the scaling laws derived in Proposition 4.3 are of a very different form than the Kaplan scaling laws (e.g., $N$ and $D$ in the Kaplan scaling laws are in the denominator with fractional power, and are not exponentiated).\n- The authors claim that the function implemented by the quantized network approaches the function computed by the full-precision network as the network size grows, but I do not see how this is reflected in the results. Specifically, it is worthwhile to write down an explicit bound on the difference between the two networks that goes to zero with $m$, when keeping other parameters (e.g. dataset size and network scale) constant.
I currently did not see such a bound, but maybe I am missing something.\n- Related to the above, the main results that relate the quantized and non-quantized networks (Theorems 5.1 and 5.2) rely on the scaling parameter $\kappa$ being very small. However, clearly taking $\kappa$ to zero makes both functions go to zero, and thus trivially the difference between the two functions becomes small. But this is also true for any (bounded) function multiplied by $\kappa$, so it seems that these results trivially hold. Do the results simply imply that all functions go to zero with $\kappa$ and thus become close, or can these results hold when both functions are bounded away from zero (e.g. by a constant)?\n- The functions studied in 6.1 are relatively low-dimensional, which may mean that they can be approximated easily in the kernel regime by any kernel. Did you try experimenting with functions of higher dimensions?\n\nAdditionally, there are some unclear sentences/phrases in the paper, e.g.:\n- Lines 392-395, these two sentences are not clear.\n- Line 347\n- Line 260-261\n- Line 422: \"we aimed to learn rigorous functions\" - what does \"rigorous function\" mean here?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Could the authors provide evaluation on practical models, e.g., GPT-2, and validate the proposed scaling law?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- This paper studies the important problem of scaling 1-bit neural networks. Given the increasing deployment of large foundation models, how to reduce their energy consumption becomes an increasingly important problem. 1-bit quantization is a promising approach to study.\n- The authors conduct thorough experiments to validate the theoretical analysis." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper conducts a theoretical analysis on scaling 1-bit neural networks. The authors use the NTK framework to derive a scaling law formula for 1-bit quantized networks. Experiments on simulating target functions demonstrate the effectiveness of the proposed scaling law." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- This paper studies the scaling law of 1-bit neural networks. However, the analysis is performed on simple toy models. The original scaling law proposed by Kaplan [1] was derived on Transformers with millions/billions of parameters and huge amounts of pretraining compute. The goal of a scaling law should be to study how to reliably predict the performance of pretraining with larger compute. However, the setting of this paper does not fit in the category of “scaling law”, given the small-scale experiments conducted on toy models.
\n- The experiments do not support the claim that there is a scaling law for 1-bit quantized neural networks. The authors do not demonstrate the exact scaling law formula, e.g., what the values of the hyperparameters in Proposition 4.3 are when fitting this curve to the experiments in Figure 1. Obtaining the coefficients in the scaling law is of practical importance, as they can be used to predict the performance of training larger 1-bit quantized models. \n- What is the justification for using NTK for studying this problem? The introduction of Section 3.3 seems rather abrupt and lacks a discussion of the motivation." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See Weaknesses" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1) The paper introduces a variation of the soft-committee machine that can be treated analytically to show the convergence of 1-bit models.\n\n2) The convergence is analysed for the number of model parameters and the size of the training data set." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper shows that a modified 1-bit soft-committee machine converges to an arbitrarily small loss as the width of the hidden layer goes to infinity. It also gives a theoretical upper bound on the rate of convergence.\nThey also show that a 1-bit model can be trained to estimate complex functions with high accuracy." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) Given that the paper provides numerical results, I found it problematic that the main result of the paper (the functional form of the convergence of the training loss of 1-bit models) is not verified with numerical results. \n\n2) The finding of a \"scaling law\" in Fig.1 with four data points, no comparison with a theory curve, and significant fluctuations is, in my opinion, not a valid verification of Proposition 4.3 at all.\n\n3) The figures are not properly described/referenced in the text. \n\n4) The introduction of $\kappa$ in the model seems a bit arbitrary, especially considering that it is necessary to \"plug an appropriate value of $\kappa$\" (Line 391) into the model to ensure learning.\n\n5) The abstract does not adequately explain what is being done in the paper. Instead of citing previous results in the abstract, I would like to see the model and a more precise definition of the results there."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024unlocking,\ntitle={Unlocking the Theory Behind Scaling 1-Bit Neural Networks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x5YEibapUM},\nnote={under review}\n}" }, "abstract": { "value": "Recently, 1-bit Large Language Models (LLMs) have emerged, showcasing an impressive combination of efficiency and performance that rivals traditional LLMs. Research by Wang et al. (2023); Ma et al. (2024) indicates that the performance of these 1-bit LLMs progressively improves as the number of parameters increases, hinting at the potential existence of a *Scaling Law for 1-bit Neural Networks*. In this paper, we present the \\emph{first theoretical} result that rigorously establishes this scaling law for 1-bit models. We prove that, despite the constraint of weights restricted to $\\\\{-1, +1\\\\}$, the dynamics of model training inevitably align with kernel behavior as the network width grows. This theoretical breakthrough guarantees convergence of the 1-bit model to an arbitrarily small loss as width increases. Furthermore, we introduce the concept of the generalization difference, defined as the gap between the outputs of 1-bit networks and their full-precision counterparts, and demonstrate that this difference maintains a negligible level as network width scales. Building on the work of Kaplan et al. (2020), we conclude by examining how the training loss scales as a power-law function of the model size, dataset size, and computational resources utilized for training. Our findings underscore the promising potential of scaling 1-bit neural networks, suggesting that int1 could become the standard in future neural network precision." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "1-bit neural network", "quantization", "neural tangent kernel" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/a2abe47d145314b7cd07f12f1275caf9ca7f15bb.pdf" }, "presentation": null, "primary_area": { "value": "learning theory" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/f647d5e89edaaa41d6b8ae20fcdc7bba5b42fd54.zip" }, "title": { "value": "Unlocking the Theory Behind Scaling 1-Bit Neural Networks" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
x5hXkSMOd1
SANER: Annotation-free Societal Attribute Neutralizer for Debiasing CLIP
main
Active
Societal bias;CLIP;Debiasing;Fairness
alignment, fairness, safety, privacy, and societal considerations
5;5;8;8
3;3;4;4
2;2;3;3
2;2;4;3
3;2;3;4
6.5
3.5
2.5
2.75
3
1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- The caption in Figure 2 should contain ideally an explanation of what is seen in the figure. At least a reminder of the loss terms would be helpful to avoid having to jump back and fourth through the paper.\n\n- How does Saner apply to non-binary gender? Does it properly use general pronouns like they?\n\n- Conceptually, it seems hard to draw a line between terms related only to the gender and to the gender and a concept. Should de-biasing replace terms like pregnant? There is no male counterpart of this so replacing it will probably remove important information.\n\n- Is the \"multilayer perception\" in line 81 supposed to be a multilayer perceptron?\n\n- The definition of a dataset in Line 104 seems odd in the sense that, if a is a protected attribute then this dataset should only contain data with protected attributes. Is that intended? I would imaging the goal is to train on a large dataset which can have samples with protected attributes but not all samples have to have protected attributes?\n\n- One added sentence for 2.2 on how a orthogonal projection is helping debiasing would be helpful for understanding.\n\n- How hard is it to re-train clip? When the semantics are lost the regularization loss may be high. How computationally heavy is it to get to the same or a similar performance level?\n\n- Table 2's caption should probably read \"racial bias\" and not recial bias." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is very clearly written. Whenever an uncertainty comes up about a term footnotes help understanding.\n\nThe analysis of debiasing techniques is very clearly presented that it could be used for a tutorial. The experimental setup is well done, the evaluation metrics such as measuring the difference between a uniform distribution and a potentially gender biased image generation seems adequate for the task. Showing generated images, i.e. with Stable Diffusion shows also a very immediate practical application of this research.\n\nThe figures help understanding of the method even though the caption could be changed to make them more self-complete.\n\nIn general, this seems a very well written, easy to understand paper addressing a very current need." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper aims to overcome societal bias present in datasets used to train large scale multi-modal models like CLIP. The authors provide a study of debiasing methods and explain how these struggle with loosing important attribute information, e.g. gender, or depend on attribute annotations which are hard to get and even harder to get in the form of a large diverse dataset. 
Their contributed method, SANER, has four components which together aim to remove biases: attribute neutralization, feature modification, an attribute annotation-free debiasing loss, and regularization losses. The paper compares the results on quite a few tasks such as text-to-image generation, text-to-image retrieval and zero-shot image classification." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The attribute groups will be limited to a specific set of defined attributes, which need to be agreed upon and could be debatable, e.g. \"pregnant\" (If I understand list C in the appendix correctly). There may also be missed attributes. This does not seem like a big issue but could be one in an adversarial setting. \n\nIn general, the benefits of not needing an annotated dataset with attributes seem to be bought by needing a general list of attributes. This may not be a weakness per se but it would be interesting to either read about it in the limitations or have the authors comment on why this is not a limitation.\n\nThere are general implementation details about the used architecture but none about the training process. Re-training CLIP with a loss to keep its performance while having a re-projection that changes the semantics seems complex. Information about the GPUs used would help in understanding how hard training for 5 epochs is.\n\nComparison against other state of the art in this small field is limited but understandably so, and the authors did a good job re-implementing relevant approaches." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "Yes, Discrimination / bias / fairness concerns" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see the weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The problems with existing methods are straightforward, and the authors conduct several experiments to verify these phenomena.\n- The proposed method achieves good performance.\n- The experiments are conducted on both text-to-image retrieval and generation tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper aims to address societal biases in large-scale vision-language models, such as CLIP. The authors identify two limitations of existing methods: the loss of attribute information and the reliance on attribute annotations. To overcome these challenges, the authors propose SANER, a simple yet effective debiasing method for CLIP that incorporates attribute neutralization and an annotation-free debiasing loss. Extensive experiments are conducted to verify the method's effectiveness." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Could the authors provide ARL results to verify the loss of attribute information?
Besides, do other existing methods, such as Mapper and Prompt tuning-based debiasing, also lose attribute information when debiasing?\n- Although effective in debiasing CLIP, the proposed method appears ad-hoc for this specific task and lacks major technical contributions, as most components seem to be existing technologies or tricks. Therefore, this paper may not have significant influence or provide broader insights beyond the CLIP-debiasing task.\n- Why does adding only contrastive losses have a significantly negative impact on FairFace performance (as shown in Table 7)?\n- Is the proposed method only applicable to CLIP? Could the authors test other VLMs to demonstrate its broader effectiveness?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The two identified challenges, especially the loss of attribute information, sound critical and interesting. It seems like the pipeline only involves lightweight training, as only the debiasing layer is trained. The pipeline is evaluated on two different downstream tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a CLIP debiasing pipeline that aims to address two existing challenges: 1) loss of attribute information and 2) dependency on attribute annotations. Specifically, the paper focuses on adjusting the text feature by replacing the protected attributes with attribute-neutral terms and further regularizing the adjusted feature by two reconstruction losses. The pipeline is evaluated on text-to-image retrieval and generation tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Although replacing the protected attribute words with an attribute-neutral word is a sound neutralization method, it requires a comprehensive list of the protected attributes (as listed in Appx. C). However, in real practice, and especially in non-binary cases, it will be very hard or nearly impossible to have a complete list. Further, since the \"attribute annotation-free debiasing loss\" relies on a set of attribute-specific descriptions, creating the attribute-specific descriptions set may lead to some concerns. For example, if the set is created by iterating all possible words and all possible groups (e.g., man, woman, male, female, he, she, etc.), then the size of the set will be overly large and increase the computational complexity. On the other hand, if only one word is selected for each group (e.g., man + female, woman + male), then how to select matching words from groups may also affect the debiasing performance.
Also, is it possible that the generated description may have unmatched text (e.g., A woman/boy is eating salad) or text that has grammar errors (e.g., A he/she is eating salad, according to Appx. C)? I would like to see more discussion on the text generation. \n\n2. The proposed pipeline tries to address the challenge by combining the debiasing loss with two extra regularization losses. The debiasing loss tries to place the adjusted feature of the attribute-neutralized text at the midpoint of several features of the attribute-specific text, which seems to be fine. However, the regularization losses try to project the attribute-neutralized feature onto both the text and visual features of an attribute-specific description, which seems strange. As demonstrated in Figure 2, the input to the \"Text encoder\" has been neutralized, and the attribute-specific information has been removed. Further, the debiasing layer is also trained to adjust the features to be further neutral. Thus, even though there is a residual connection in the \"Feature modification\" process, the output feature should not contain any information regarding the specific attribute. Then, how is it possible that such a feature can be mapped to either text or visual features that contain specific attribute information to avoid \"the loss of attribute information\"? Is it possible that the \"retained attribute information\" is some biased information from the dataset that is implicitly polluted by the regularization loss?\n\n3. For the evaluation of the text-to-image generation, the paper only provides numerical results on debiasing performance and some visualizations but does not provide any image generation empirical results, like FID. Also, only a few SOTA methods are used as baselines, which makes the comparison weak. However, this may be understandable as several baselines do not have public implementations, and it seems the authors prepared all the baseline results themselves." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Have you considered running experiments with LLaVA-like models? I don't think it would be terribly difficult to replace the CLIP model with the proposed one, run standard LLaVA experiments (there are a few GitHub repos that are easy to set up for such evaluation), and report the results. As for debiasing experiments, I've seen people use PATA and FairFace and just look at the next token predicted to get scores similar to the CLIP-style evaluation. Showing this CLIP replacement method works on text generation as well as image generation would very much increase the impact of the work, in my opinion. However, I realize this experiment may best be left for future work."
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "I will start off by saying that the lede of the paper was buried: section 5 as an experiment is astounding. I absolutely love the drag and drop replacement of this debiased model into an existing model. The idea that, 1. you don't have to retrain the text-to-image model and 2. that you don't have to retrain CLIP from scratch is incredibly compelling. I recommend the abstract/intro be rewritten to highlight this experiment.\n\nOn another note, the paper is generally well-written and easy to follow. The problem of bias in VLMs is highly prominent and this work makes an excellent contribution to the field. I am fairly familiar with this field of debiasing clip and I feel section 2.1 does a good job summarizing the various papers." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a method for finetuning a CLIP model to remove social bias by an intuitive training scheme." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I don't have too many issues with the paper. One thing I would like to have seen is a more automated method for text neutralization. While word level ablation is intuitive and makes sense for the social bias angle, a less hardcoded approach seems like it would be needed for a general-purpose debiasing solution, especially for concepts that are hard to ablate at the word level.\n\nThere's a dataset called FACET that addresses some issues with PATA and FairFace. If there's time, it would be great to run experiments on it as well. This would cement the proposed method as useful and cutting edge in my opinion." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose SANER, a debiasing method for CLIP that removes societal bias without requiring attribute annotations, while preserving attribute-specific information." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024saner,\ntitle={{SANER}: Annotation-free Societal Attribute Neutralizer for Debiasing {CLIP}},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x5hXkSMOd1},\nnote={under review}\n}" }, "abstract": { "value": "Large-scale vision-language models, such as CLIP, are known to contain harmful societal bias regarding protected attributes (e.g., gender and age). In this paper, we aim to address the problems of societal bias in CLIP. Although previous studies have proposed to debias societal bias through adversarial learning or test-time projecting, our comprehensive study of these works identifies two critical limitations: 1) loss of attribute information when it is explicitly disclosed in the input and 2) use of the attribute annotations during debiasing process. To mitigate societal bias in CLIP and overcome these limitations simultaneously, we introduce a simple-yet-effective debiasing method called SANER (societal attribute neutralizer) that eliminates attribute information from CLIP text features only of attribute-neutral descriptions. Experimental results show that SANER, which does not require attribute annotations and preserves original information for attribute-specific descriptions, demonstrates superior debiasing ability than the existing methods." 
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Societal bias", "CLIP", "Debiasing", "Fairness" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/3ac1d2afcfe95d6aad2bbb3e3a5c69095a0431cc.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "SANER: Annotation-free Societal Attribute Neutralizer for Debiasing CLIP" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
x5l5PRtvul
Hybrid Kernel Stein Variational Gradient Descent
main
Active
Stein Variational Gradient Descent;Approximate Inference;Particle-based Variational Inference;Gradient Flow
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
5;5;6;8
3;3;3;2
4;2;3;3
3;3;3;3
2;3;3;3
6
2.75
3
3
2.75
-0.942809
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "No further questions." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The theoretical foundation of h-SVGD is established, which is new and important to this research topic.\n\n2. Besides the new results, some existing results are proved with relaxed and more practical assumptions." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposed studied the theoretical foundation of the hybrid kernel Stein variational gradient descent (h-SVGD) method, which is a variant of the vanilla SVGD method. Specifically, the authors demonstrated the ability of h-SVGD to alleviate variance collapse, and showed the existence of a solution to the hybrid Stein partial differential equation for h-SVGD. They also showed that h-SVGD does not converge to the target distribution in the mean field limit. Experiments have been provided to show the promising properties of h-SVGD." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "To be honest, since I am not familiar with the mathematical tools used in this work, I can only judge the paper based on what the authors have done, as they claimed, and I cannot say much about its weaknesses." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- The result in Corollary 3.4 indicates that h-SVGD does not guarantee the distribution of particles converges to the target distribution. While this is reasonable, the metrics commonly used in Bayesian Neural Network (BNN) tasks, such as test RMSE and test LL, do not reflect this limitation. This raises the question of how to interpret the advantages of h-SVGD despite this theoretical constraint." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- This paper provides a relatively comprehensive theoretical foundation for the empirical method hybrid kernel variant of SVGD.\n- This paper offers a clear explanation of the relationships and distinctions between the theoretical results of h-SVGD and those of SVGD." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper explores the theoretical understanding of the hybrid kernel variant of Stein variational gradient descent (h-SVGD). Specifically, it provides a kernelized Wasserstein gradient flow representation for h-SVGD and, building upon this, offers the dissipation rate and large particle limit of h-SVGD. In numerical experiments, the authors demonstrate that h-SVGD significantly improves variance collapse compared to vanilla SVGD." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The experimental results presented in the paper offer relatively limited support. In results on the Protein and Kin8nm datasets, SVGD does not appear to suffer from variance underestimation issues, yet its performance is still inferior to that of h-SVGD. This observation suggests that the advantage of h-SVGD in BNN tasks may not primarily stem from mitigating variance underestimation." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- In Assumption (A2), the first part can be deduce from Assumption (A1). In fact, Assumption (A1) induce that $V$ is bounded. Why do you precise this assumption ?\n- Can you give an interpretation of the second part of Assumption (A2) ?\n- Can you give an interpretation of Assumption (A3) ?\n- Do you have examples of potentials $V$ that verifies Assumptions (A1)-(A2)-(A3)-(A4) ?\n- According to your analysis of h-SSVD you have a proposition of a sampling strategy that does not suffer from dimensional collaps and sample the right distribution ?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "The paper provides interesting theoretical results on h-SVGD. It extend the known results on SVGD to h-SVGD. The experimental results effectively support the interest of h-SVGD." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Ths paper gives more theoretical insights about a sampling method named hybrid kernel stein variational gradient descent (h-SVGD). It is demonstrated that this method effictively sample a law which is not the target law (but linked to it). Moreover, it provides a descent result and a discretization quantification. 
Finally, it presents experiments that show the empirical interest of h-SVGD." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Major weaknesses:\n- Assumption (A1) does not seem to be well written. Supposing that $\lim_{|x|\to +\infty} V(x) = +\infty$ would be a more realistic assumption to ensure that $\pi$ can be a probability measure. Is it a writing error?\n- There are few intuitions and comments in the paper, making it hard to read. In particular, I suggest discussing the assumptions in detail. A reminder on Wasserstein gradient flows would be appreciated. Finally, Proposition 3.6 is not commented on.\n\nMinor weaknesses:\n- Equation 6: there is a typo, it might be $(x, \cdot)$ instead of $(\cdot, x)$.\n- line 198: in the paper's notation, it might be $\mathcal{H}$ instead of $\mathcal{H}_1$.\n- The remark in line 246 seems inconsistent because Assumption (A1) implies that $V$ is bounded.\n- In line 250, the constant $L$ is said to be dependent on $k_1$, $k_2$, and $p$. However, it is not clear what the variable $p$ is.\n- In line 329, the normal density does not satisfy Assumption (A1). In fact, in this case $V$ is quadratic.\n- line 226: I suggest being more precise about the meaning of \"symmetric function\" in this context." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- The authors mention convergence issues in the mean field limit. Are there any ways to mitigate this bias?\n- The authors focus on the $k_2 = ck_1$ case. Have they tried using two completely different kernels?\n- The authors reference h-SVGD's promising results on image classification. Have they conducted similar experiments in this paper?\n- From an applications perspective, why is it important to avoid variance collapse?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Theoretical contributions are rigorous and sound, including the existence of a solution to the hybrid Stein PDE and the establishment of the descent lemma. A kernelised Wasserstein gradient flow interpretation is thoroughly discussed with $k_2 = ck_1$.\n- The experiments involve diverse datasets and metrics.\n- The presentation of the experiments is clear and the paper is well-structured.\n- It is good to quantify that h-SVGD does not add computational cost relative to SVGD." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper discusses h-SVGD as a method to address variance collapse by using separate kernels for the driving and repulsive terms. The authors extend the theoretical aspects of the original paper and demonstrate that h-SVGD does not converge to the target distribution in the mean field limit. The empirical results support the theoretical findings."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The title may be a bit misleading as it suggests that h-SVGD is a novel algorithm introduced in this paper, but it was previously proposed. For instance, it can reflect the paper's focus on theoretical analysis and empirical evaluation of h-SVGD, rather than introducing it as a new method.\n- The main focus on $k_2 = ck_1$ may be somewhat limiting, as it potentially restricts the exploration of using truly distinct kernels for the driving and repulsive terms.\n- While RMSE and LL have been assessed in Appendix B, h-SVGD does not show clear benefits over SVGD in these metrics. It would be better to provide a more detailed discussion in the main text about why DAMV is an appropriate metric for evaluating h-SVGD's performance, especially in relation to the variance collapse problem.\n- Using \"S-SVGD\" and \"SSVGD\" interchangeably is slightly confusing and could be more consistent." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024hybrid,\ntitle={Hybrid Kernel Stein Variational Gradient Descent},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x5l5PRtvul},\nnote={under review}\n}" }, "abstract": { "value": "Stein variational gradient descent (SVGD) is a particle based approximate inference algorithm. Many variants of SVGD have been proposed in recent years, including the hybrid kernel variant (h-SVGD), which has demonstrated promising results on image classification with deep neural network ensembles. In this paper, we demonstrate the ability of h-SVGD to alleviate variance collapse, a problem that SVGD is known to suffer from. Unlike other SVGD variants that alleviate variance collapse, h-SVGD does not incur additional computational cost, nor does it require the target density to factorise. We also develop the theory of h-SVGD by demonstrating the existence of a solution to the hybrid Stein partial differential equation. We highlight a special case in which h-SVGD is a kernelised Wasserstein gradient flow on a functional other than the Kullback-Leibler divergence, which is the functional describing the SVGD gradient flow. By characterising the fixed point in this special case, we show that h-SVGD does not converge to the target distribution in the the mean field limit. Other theoretical results include a descent lemma and a large particle limit result. Despite the bias in the mean field limiting distribution, experiments demonstrate that h-SVGD remains competitive on high dimensional inference tasks whilst alleviating variance collapse." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Stein Variational Gradient Descent", "Approximate Inference", "Particle-based Variational Inference", "Gradient Flow" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/58133e099f583da04b9c13de13226d65cafa8dae.pdf" }, "presentation": null, "primary_area": { "value": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Hybrid Kernel Stein Variational Gradient Descent" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
x6YSsKYJuH
TuBA: Cross-Lingual Transferability of Backdoor Attacks in LLMs with Instruction Tuning
main
Active
backdoor attacks;cross-lingual transfer;LLMs
foundation or frontier models, including LLMs
5;5;5
5;3;4
3;3;3
3;3;2
3;3;2
5
4
3
2.666667
2.666667
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to the \"Weaknesses\" section for further information." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper presents an interesting question, namely whether backdoor can transfer between multiple languages. The paper conducts a large number of experiments and finds that multi-language models, when fine-tuned with data from a few languages, also affect the performance of other languages.\n\n- The organization of the paper is very complete, including chapters on attack settings and objectives, defenses against the proposed attack method, etc.\n\n- The paper conducts experiments on some closed-source models (such as gpt-4o) to verify the practical impact, which is of great significance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper leverages instruction tuning and explores the backdoor transfer abilities of large language models across multiple languages. The paper empirically analyzes and finds that some multi-language large models achieve over 90% attack success rates in more than 7 languages. These findings underscore a widespread and language-agnostic vulnerability that threatens the integrity of MLLMs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- This paper seems to only conduct empirical research? I think this kind of contribution may not be sufficient for top-level deep learning conferences such as ICLR. While this research is very interesting, I think that the findings of this study may not have very far-reaching significance. Can we further explore the underlying issues? For example, what deeper implications does this phenomenon reflect, and how can we improve the robustness of models in response to such phenomena?\n- The empirical research in this paper only includes the scenario of instruction tuning, which seems insufficient for an empirical study. We know that the backdoor community has proposed a large number of methods, and there are also various ways of applying large language models. Is the phenomenon revealed in this paper widely prevalent?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to weaknesses" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper was the first to investigate the cross-lingual transferability of backdoor attacks on LLM instruction tuning. The experiments for the cross-lingual attack effectiveness are well-designed, and the results are presented clearly in 6 European and 6 Asian languages (5,600 instances for each language)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper investigates the cross-lingual transferability of backdoor attacks in the instruction tuning of large language models (LLMs). The work demonstrates that backdoor attacks can effectively transfer across different languages and attack settings/objectives, even when the poisoned instruction-tuning data for specific languages is fixed (one or two). The authors provide experimental results with high attack success rates on models like mT5 and GPT-4o, across 26 different languages." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The motivation of cross-lingual backdoor attack is week, because it seems that taking the same/similar threat model with the existing backdoor attacks on instruction-tuning (Inject a few ratio of poisoning data into the fine-tuning dataset).\n\n2. Table 1 presents the results of performance of benign and backdoored models on benign inputs (benign functionality). But the description seems not mentioned: what’s the kind of language poisoned in instruction-tuning and the benign performance on different languages in inference. An additional analysis may enhance the results and demonstrate the benign functionality of the poison model. \n\n3. The evaluations on different LLM tasks (e.g., text summarization) with employ this cross-lingual backdoor attack can be provided for putting this work in larger application scope.\n\n4. This paper provides attack results on cross-lingual transferability but lack of sufficient explanation to this property." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "Yes, Privacy, security and safety" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Could the author explain in detail the definitions of English refusal and In-language refusal?\n\n2. Could the author elaborate on the practical scenarios of backdoor attacks? Typically, adversaries inject backdoors into traditional PLMs, particularly in text classification tasks. For example, attackers may exploit triggers to evade spam detection systems. 
However, in generation tasks involving LLMs, the impact of an attack initiated by the adversary appears weaker compared to that initiated by the user. Research indicates that using instruction or scenario triggers can be more harmful than triggers covertly selected by attackers (see references [1-3]). In other words, what does it mean when a cross-linguistic backdoor is activated by an attacker? I believe it is more practical when users activate it, as attackers are the direct beneficiaries.\n\n**Reference**\n\n[1] TrojanRAG: Retrieval-Augmented Generation Can Be Backdoor Driver in Large Language Models\n\n[2] Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection\n\n[3] Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The instruction fine-tuning-based backdoor exposes vulnerabilities in multilingual LLMs.\n2. The extensive evaluation effectively demonstrates the transferability from the poisoned languages to other languages.\n3. Well-written and readable." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on cross-lingual backdoor attacks against multilingual LLMs via instruction tuning. Extensive evaluations show that poisoning one or two languages will affect the outputs for languages whose instruction-tuning data were not poisoned." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**1. Lack of a theoretical proof or interpretable analysis**\n\nAlthough the authors demonstrate the backdoor transferability of MLLMs through extensive evaluation, they do not show, methodologically or via interpretability analysis, why this transferability exists. This is important for exposing the vulnerability of instruction tuning on MLLMs.\n\n**2. Lack of novelty**\n\nSimilarly, given the lack of such a vulnerability analysis of instruction fine-tuning on MLLMs (refer to Weakness 1), the work appears overly engineered. In other words, such a lengthy evaluation of this single finding is unnecessary. The work would be more solid if the authors started from a new attack strategy for achieving backdoor transferability, e.g., showing that poisoning a single language can significantly improve attack transferability." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a novel attack demonstrating that backdoor attacks can transfer across multiple languages in multilingual LLMs, even when only a small fraction of instruction-tuning data is poisoned in one or two languages." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024tuba,\ntitle={Tu{BA}: Cross-Lingual Transferability of Backdoor Attacks in {LLM}s with Instruction Tuning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x6YSsKYJuH},\nnote={under review}\n}" }, "abstract": { "value": "The implications of backdoor attacks on English-centric large language models (LLMs) have been widely examined — such attacks can be achieved by embedding malicious behaviors during training and activated under specific conditions that trigger malicious outputs. 
Despite the increasing support for multilingual capabilities in open-source and proprietary LLMs, the impact of backdoor attacks on these systems remains largely under-explored. Our research focuses on crosslingual backdoor attacks against multilingual LLMs, particularly investigating how poisoning the instruction-tuning data for one or two languages can affect the outputs for languages whose instruction-tuning data was not poisoned. Despite its simplicity, our empirical analysis reveals that our method exhibits remarkable efficacy in models like mT5 and GPT-4o, with high attack success rates, surpassing 90% in more than 7 out of 12 languages across various scenarios. Our findings also indicate that more powerful models show increased susceptibility to transferable cross-lingual backdoor attacks, which also applies to LLMs predominantly pre-trained on English data, such as Llama2, Llama3, and Gemma. Moreover, our experiments demonstrate the high transferability of the proposed attack: 1) the backdoor mechanism successfully operates in cross-lingual response scenarios across 26 languages, achieving an average attack success rate of 99%, and 2) the proposed attack remains effective even after defenses are applied. These findings expose critical security vulnerabilities in multilingual LLMs and highlight the urgent need for more robust, targeted defense strategies to address the unique challenges posed by cross-lingual backdoor transfer." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "backdoor attacks", "cross-lingual transfer", "LLMs" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/94ea59618a4d9e7d79cd9665d249c9c71dc04227.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/6a79ee83cc6be9191de6862c69641e589b25f3c0.zip" }, "title": { "value": "TuBA: Cross-Lingual Transferability of Backdoor Attacks in LLMs with Instruction Tuning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
x7NbaU8RSU
TurboRAG: Accelerating Retrieval-Augmented Generation with Precomputed KV Caches for Chunked Text
main
Active
Retrieval-Augmented Generation; Large Language Models; Precomputed KV Cache
foundation or frontier models, including LLMs
5;5;6;8
4;4;3;3
2;3;3;3
2;2;3;3
3;2;2;3
6
3.5
2.75
2.5
2.5
-0.816497
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. First of all, it is unfair to use GPT-4o for a non-English benchmark because GPT-4o is predominantly trained for the English language. Second, why is the TFLOP, TTFT not reported for the RGB English dataset?\n\n2. Why is gpt-4o's baseline not reported for the LongBench multi-doc QA datasets? I think there are plenty of space left to include these results.\n\n3. Can the authors address the potential data contamination from its fine-tuning dataset to the 5 datasets used in the evaluation section (4 from long bench and 1 from RGB). Musique, wikimqa and hotpotqa are primarily based on wikipedia articles, and many datasets used in Table 6 are also based on wikipedia articles. Also, HotpotQA is included in table 6, why is it also used for testing?\n\n4. I think the paper would benefit if the authors show this turboRAG technique is applicable to other LLMs, such as the more recent state-of-the-art Llama3.1 8B and 70B.\n\n5. Currently, there is no link to code, model checkpoints and fine-tuning data. This raises issue of reproducibility." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper introduces a new computational technique that would speed up the inference time of an open-sourced RAG system. The paper makes the observation and subsequently the assumption that cross-attention between documents during inference is unnecessary. To avoid doing so would reduce significantly computation during inference. Therefore, this computational technique can be viewed as an attention-approximation. This assumption seems to hold for current RAG benchmarks such as RGB and LongBench multi-doc QA. This paper is written with sufficient motivation , and is able to explain its proposed computational technique with clarity. This paper identifies two technical problems with naively concatenating pre-computed kv caches, i.e. misrepresentation of positional ids, and a pre-trained LLM would suffer from distributional shift with reordered positional ids. The paper then proceeds to solve these two problems by using either composite or reordered positional ids, and then fine-tune a pre-trained LLM. In the experiment section, the paper shows that on RGB benchmark, turboRAG is only slightly worse than gpt-4o based naive RAG, and is significantly better than Qwen-7B based naive RAG. On 4 datasets from LongBench, this paper shows that turboRAG achieves 9x speedup and significant inference-time resource saving over Qwen-7B based naive RAG." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a kv caching computational technique that would significantly speed up (during inference time) the conventional RAG system in terms of time to first token (which would reduce user's wait time), and significantly reduce inference-time TFLOPs. In order to implement this technique, one would need to precompute the kv caches for all documents in a database. The paper makes the assumption that cross-attention between different retrieved documents are unnecessary, setting them to zero, which makes this computation technique an attention-approximation method. The paper reorder positional ids for the precomputed kv caches of the retrieved documents from a database, and using a fine-tuned LLM, it is able to achieve overall comparable performance on the RGB Benchmark as a gpt-4o powered conventional RAG system with some performance degradation in high-noise setting.\nThis computational technique is applicable to the following resource-constrained setting: an organization that has large storage space to store all pre-computed kv caches, has the hardware to fine-tune large language models (>=7B), has a preference to use open-sourced LLM (that use relative positions as positional embeddings) as generator rather than commercially available ones such as GPT-4o, whose user query instances and documents in database satisfy the assumption that cross-attention between documents are unnecessary for answer user queries, and is concerned about optimizing inference time." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. insufficient related work discussion. TurboRAG essentially uses an attention-approximation technique that assumes no cross-attention between different documents, this falls in line with many other work that manipulate and attempt to utilize the sparsity in attention maps. But few work along this line is mentioned and compared with the computational trick in turboRAG to convince me that it is a novel approach. This paper also reorders positional ids in RoPE, there are also many similar works that manipulate positional ids to various gains, such as efficiency in long-context inferencing, no such work is mentioned and compared with the positional id reordering trick used in the paper to convince me that is is a novel approach. Therefore, I would say the novelty and originality of this work is moderately low. \n\n\n2. I am concerned about the limitations TurboRAG imposes on the research direction of RAG. As I wrote in the summary section, TurboRAG requires model fine-tuning which rules out most commercial LLMs. TurboRAG also requires a significant amount of offline processing and storing all precomputed kv caches, this is generally not feasible with real-life database. This paper does not discuss storage cost and time for offline kv computation. Lastly, the assumption made in paper (cross-attention between documents are unnecessary) would limit future work that adapts this technique and prevent them to explore harder, more challenging and more meaningful RAG tasks where the user query requires a careful synthesis of information in different documents, such as scientific document summarization, meta analysis generation etc. 
In fact, while some current benchmarks (5 explored in this paper: 4 from LongBench and 1 from RGB) do not require information synthesis between different retrieved documents, this limitation in benchmarking resources should not be a reason to make provincial and limiting assumptions that would negatively impact future research efforts in RAG. Note that no limitations section is included in this work even though there is almost one spare page. Because of these limitations, I am inclined to reject this paper in its current version. \n\n\n3. I have some questions on the experimental evaluation sections that I will reserve for the Questions section." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Could the authors provide an analysis of the storage requirements for TurboRAG compared to conventional RAG systems? This information would help readers understand the trade-offs involved.\n\n2. How does TurboRAG impact memory usage during inference, especially for scenarios with many retrieved documents? An analysis of memory consumption would provide a more complete picture of the system's efficiency.\n\n3. How does TurboRAG compare to other recent optimization techniques for RAG systems or long-sequence processing, such as efficient attention mechanisms (e.g., Performer, Reformer) or other RAG caching and optimization strategies?\n\n4. How does TurboRAG scale with increasing document collections or query complexity?\n\n5. The paper focuses on a specific LLM architecture. Have the authors tested or considered how TurboRAG might apply to other popular LLM architectures? This information would be valuable for understanding the broader applicability of the approach." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "**Originality:**\n1. Introduces TurboRAG, a novel approach to accelerate Retrieval-Augmented Generation (RAG) systems by precomputing key-value (KV) caches for document chunks offline.\n2. Proposes innovative techniques to handle attention masks and position IDs to maintain model accuracy while using precomputed KV caches.\n3. Redesigns the RAG inference paradigm by transforming the online computation of KV caches for retrieved documents into a hybrid offline-online process.\n\n**Quality:**\n1. Provides a comprehensive experimental evaluation across multiple benchmarks, including RGB and LongBench.\n2. Demonstrates significant performance improvements, achieving up to 9.4x speedup in time-to-first-token (TTFT) compared to conventional RAG systems.\n3. Conducts thorough regression tests to ensure the proposed modifications do not negatively impact the model's general capabilities.\n4. 
Presents detailed ablation studies on different configurations (TurboRAG-composite and TurboRAG-reordered) and analyzes their performance under various noise ratios.\n\n**Clarity:**\n1. Well-structured paper with clear sections outlining the problem, methodology, and experimental results.\n2. Includes informative figures (e.g., Figure 1 and Figure 2) that effectively illustrate the differences between standard RAG and TurboRAG approaches.\n3. Provides clear explanations of technical concepts, such as the attention mask matrix and position ID rearrangement.\n\n**Significance:**\n1. Addresses a critical performance bottleneck in RAG systems, potentially enabling their application in latency-sensitive scenarios.\n2. Achieves substantial improvements in TTFT without compromising accuracy, which could have broad implications for real-world RAG applications.\n3. Proposes a method that is applicable to most existing large language models without requiring modifications to the models or inference systems.\n4. Reduces computational resource utilization during online inference by 98.46% compared to standard RAG, significantly increasing the maximum supported batch size and enhancing throughput.\n\nOverall, the paper presents a novel and significant contribution to the field of RAG systems, offering a well-executed and clearly explained approach to improve their performance while maintaining accuracy. The potential impact on real-world applications and the broader applicability of the proposed techniques add to the paper's significance in the field of natural language processing and information retrieval." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "1. This paper introduces TurboRAG, a novel approach to improve the performance of Retrieval-Augmented Generation (RAG) systems without sacrificing accuracy. \n2. A new pipeline that decomposes the prefill stage of conventional RAG systems into offline and online phases, significantly reducing the overhead of key-value (KV) cache computation.\n3. Techniques to handle the attention mask and position IDs to maintain model accuracy, including:\n\n\n a. Independent attention between document chunks. \n b. Rearranged position IDs for concatenated KV caches.\n\n\n4. A fine-tuning approach to adapt language models to the new attention and position ID patterns.\n5. Substantial improvement in time-to-first-token (TTFT) performance, achieving up to 9.4x speedup (8.6x on average) over state-of-the-art multi-document QA benchmarks without compromising accuracy.\n6. The authors demonstrate that TurboRAG maintains comparable accuracy to conventional RAG systems on document QA tasks, even under high-noise conditions. They also show that the approach does not significantly impact the model's general capabilities across various tasks.\n7. TurboRAG's key innovation lies in precomputing and storing KV caches for document chunks offline, then directly utilizing these caches during online inference. This approach significantly reduces computational overhead and improves inference efficiency, particularly for applications with strict latency requirements.\n8. The paper provides experimental results on multiple benchmarks, including RGB and LongBench, to validate the effectiveness of TurboRAG in terms of accuracy and performance. The authors also discuss the impact on batch size scaling and overall system efficiency."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Limited exploration of trade-offs:**\nThe paper focuses primarily on the benefits of TurboRAG but does not thoroughly explore potential drawbacks or limitations. For instance: The authors do not discuss the storage requirements for precomputed KV caches, which could be substantial for large document collections. There's no analysis of how TurboRAG might impact memory usage during inference, especially for scenarios with many retrieved documents. A more balanced discussion of these trade-offs would provide a clearer picture of TurboRAG's applicability in different settings.\n\n **Limited comparison to other optimization techniques:** The paper primarily compares TurboRAG to a naive RAG implementation. However, it doesn't extensively compare against other recent optimization techniques for RAG systems or long-sequence processing, such as Efficient attention mechanisms (e.g., Performer, Reformer) and Other caching strategies or optimization approaches for RAG systems . A broader comparison to other RAG optimization approaches in addition to native RAG and also to other LLM architectures would help contextualize TurboRAG's contributions within the current state of the art.\n\n **Limited discussion of scalability:** The paper demonstrates impressive speedups, but doesn't extensively discuss how TurboRAG scales with increasing document collections at scale or query complexity. Additional experiments or analysis on scalability would strengthen the paper's claims about TurboRAG's broader applicability." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- As referred to in the weaknesses, how long must the retrieved prompt be in order to offset the latency created by moving the KV cache from the CPU to the GPU? It seems that it went from 10x to 4x with only a difference of around 4k tokens.\n- Did you have any experiments on using the KV cache representations to enhance retrieval itself? It seems awkward to have to encode all documents with two models offline.\n- Is there any reason why you would expect the Naive RAG model would underperform Turbo reordered RAG model in english LongBench QA tasks? This seems very strange to me.\n- The fact that HotpotQA and DuReader are part of the fine-tuning data to enable TurboRAG and also the main experimental setting is makes it harder to tell if the method can truly generalize. Are we at least fully certain that there is no data leakage?" 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is well written and clear.\n- Allowing LLMs to pre-compute the KV cache of a document for later use could significantly speed-up RAG applications and reduce their computational requirements.\n- Their fine-tuning methodology is simple, intuitive and apparently quite effective in allowing models to leverage independently retrieved KV-cache information.\n- Their approach leads to no performance degradation in tasks outside of RAG." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors of this paper propose the TurboRAG framework, which allows documents to be encoded offline by an LLM and used later for any retrieval augmented generation task. Their main contribution is the idea that a model can be fine-tuned in order to enhance its robustness to missing sections of the KV-cache, since the retrieved document KV-caches will be independent of each other (retrieved documents will not be able to attend to each other directly). This allows for a 5 to 10 times speedup in terms of the \"time-to-first-token\" without any significant performance degradation in both RAG and standard generation tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- It is unclear whether the efficiency improvements are significant when the retrieved documents are short (efficiency improvements are only measured with 8-16k tokens. I believe that most RAG settings will be mostly working with shorter prompts than that. For example, the supporting passages in multi-hop QA datasets within LongBench are actually only around 200 tokens each.\n- The metrics presented in the results sections are not well specified.\n- The table column and row titles are, especially for Table 2 and 3, need to be properly capitalized and formatted." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "N/A" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is well-written and easy to follow, making the proposed techniques accessible to readers.\n\n2. The proposed pipeline for RAG demonstrates a significant decrease in time to first token (TTFT), showcasing practical improvements in efficiency." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes caching key-value (KV) pairs for all documents within a Retrieval-Augmented Generation (RAG) system to reduce the overhead of KV cache computation during inference. 
Additionally, it introduces simple techniques for reordering position embeddings to optimize performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While the idea of precomputing the KV cache to reduce TTFT is effective, it is not particularly novel. Prior work, such as [1], has already explored precomputing KV caches to achieve similar objectives. Although this paper mentions RAGCache, it lacks a thorough discussion that differentiates the proposed methods from existing approaches. Including a detailed comparison with RAGCache in both methodology and experimental results would strengthen the paper's contribution.\n\n2. While precomputing document KV caches can effectively reduce TTFT, it increases storage costs, as each document needs a separate set of KV caches for different models. It is important for the paper to mention this storage issue so that readers can understand the potential trade-offs involved.\n\n[1] RAGCache: Efficient Knowledge Caching for Retrieval-Augmented Generation" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024turborag,\ntitle={Turbo{RAG}: Accelerating Retrieval-Augmented Generation with Precomputed {KV} Caches for Chunked Text},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x7NbaU8RSU},\nnote={under review}\n}" }, "abstract": { "value": "Current Retrieval-Augmented Generation (RAG) systems concatenate and process numerous retrieved document chunks for prefill, which requires a large volume of computation and therefore leads to significant latency in time-to-first-token (TTFT). To reduce the computation overhead as well as TTFT, we introduce TurboRAG, a novel RAG system that redesigns the inference paradigm of the current RAG system by first pre-computing and storing the key-value (KV) caches of documents offline, and then directly retrieving the saved KV cache for prefill. Hence, online computation of KV caches is eliminated during inference. In addition, we provide a number of insights into the mask matrix and positional embedding mechanisms, plus fine-tune a pretrained language model to maintain the model accuracy of TurboRAG. Our approach is applicable to most existing large language models and their applications without requiring any modification of models or inference systems. Experimental results across a suite of RAG benchmarks demonstrate that TurboRAG reduces TTFT by up to 9.4x compared to conventional RAG systems (8.6x on average), while preserving performance comparable to standard RAG systems." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Retrieval-Augmented Generation; Large Language Models; Precomputed KV Cache" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review."
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b32eddd973d058edb6548701e90c43f983ca460c.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "TurboRAG: Accelerating Retrieval-Augmented Generation with Precomputed KV Caches for Chunked Text" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
x7Q0uFTH2a
Weak Bisimulation Metric-based Representations for Sparse-Reward Reinforcement Learning
main
Active
Deep reinforcement learning;Weak bisimulation metric;Representation learning;Sparse reward
reinforcement learning
1;3;5;6
5;4;4;2
1;2;2;3
2;2;2;3
1;3;2;3
3.75
3.75
2
2.25
2.25
-0.866154
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "What is the specific meaning of the mean $\\mu_c$ in the Eq.(5)? It appears that the mean $\\mu_c$ is merely a constant without practical significance. And how can it be ensured that the weak bisimulation metric effectively extracts task-relevant features?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The introduction presents a compelling motivation and provides a clear background explanation.\n\nThe proposed weak bisimulation metric is a straightforward yet effective approach, and it is easily adaptable to other bisimulation-based algorithms." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a weak bisimulation metric in sparse reward settings. Compared to the previously strict bisimulation metric methods, the weak bisimulation metric introduces two primary enhancements: (1) it relaxes the reward difference term through a trainable Gaussian distribution, alleviating potential representation collapse caused by sparse rewards; (2) it strengthens the extraction of equivalent task features by accumulating state transition distribution differences accordingly. Experimental results on DMC, Meta-World, and Adroit demonstrate that, the weak bisimulation metric indeed improves performance in sparse reward tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**1. Insufficient Coverage of Related Work.** \n\nThe current manuscript lacks adequate discussion of prior work related to bisimulation representation collapse in sparse reward tasks. Notable approaches, such as constructing bisimulation metrics through intrinsic rewards and inverse dynamics [1] or adopting action-based bisimulation to eliminate reward dependency [2], are absent. To strengthen the comparison, I recommend that the authors clarify how their proposed method differs from or improves upon these existing techniques. Specifically, they could discuss the relative advantages of their approach compared to the intrinsic reward method in [1] and the action-based method in [2].\n\n**2. Errors and Ambiguities.** \n\nThis paper contains several errors in expression and ambiguities that could affect clarity and accuracy. For example, “−” is used instead of “+” in Eq. (5) and Eq. (17). Additionally, in line 402, the statement \"We choose complex *walker_run*, *walker_walk*, *reacher_hard*, and *quadruped_run* tasks with sparse reward properties\" incorrectly categorizes *Walker* and *Quadruped* as sparse reward tasks, which are typically considered dense reward tasks as per [3]. To improve clarity, I suggest that the authors carefully review these equations and statements. 
It may also be helpful for them to provide clarification if their categorization of tasks differs from the standard definitions in [3], to ensure readers understand any intentional deviations from established terminology.\n\n**3. Experimental Design and Comparison Issues.** \n\nThe experimental setup has several issues:\n\n(1) The selection of comparative algorithms lacks sufficient representativeness. DrQ-v2 and DrM, while valuable, are not RL algorithms specifically designed to use bisimulation for extracting task-relevant representations. It would strengthen the study to include additional bisimulation-focused algorithms, such as MICo [4] and RAP [5], as well as other methods addressing bisimulation in sparse reward tasks (e.g., [1] and [2]). I suggest that the authors clarify their rationale for the current baseline selection and consider a broader comparison with bisimulation-based approaches to provide a more comprehensive evaluation.\n\n(2) In Figure 4, it is unfair to compare SRL (built upon DrQ-v2), proposed in this paper, with DBC, as DrQ-v2 demonstrates significantly superior performance to DBC.\n\n(3) In the selection of experimental tasks, it is necessary to include experiments with a noisy background. Additionally, a more in-depth qualitative analysis of the relationship between introduced redundant information and interference from noisy backgrounds is required.\n\n(4) This paper would benefit from ablation experiments to analyze the contributions of individual modules. Specifically, an ablation study comparing the model's performance with and without the Gaussian distribution term in the weak bisimulation metric could clarify its impact and importance within the framework.\n\n**4. Reliance on Assumption A.1.**\n\nThe paper's theoretical results rely heavily on Assumption A.1, which presumes that the sparse-reward expectation remains less than or equal to a sufficiently small constant $C_1$. However, the assumption lacks precise quantification, particularly regarding what constitutes a “sufficiently small” constant, potentially limiting the applicability of the results. I recommend that the authors either provide empirical evidence from their experiments to support the validity of this assumption or discuss the possible implications if this assumption does not hold in certain environments.\n\n[1] Kemertas M, Aumentado-Armstrong T. Towards robust bisimulation metric learning. NeurIPS, 2021.\n\n[2] Rudolph M, Chuck C, Black K, et al. Learning Action-based Representations Using Invariance[C]. RLC, 2024.\n\n[3] Tassa Y, Doron Y, Muldal A, et al. Deepmind control suite[J]. arXiv preprint arXiv:1801.00690, 2018.\n\n[4] Castro P S, Kastner T, Panangaden P, et al. MICo: Improved representations via sampling-based state similarity for Markov decision processes[C]. NeurIPS, 2021.\n\n[5] Chen J, Pan S. Learning representations via a robust behavioral metric for deep reinforcement learning[J]. NeurIPS, 2022." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. What is the formal definition of $\\mathbb{M}$ in Theorem 3.1? What is the definition of $\\pi$-simulation and what is a least fixed point?\n2. Can you explain the logic in line 234 and 235? Theorem 4.1 only reasons about the sup over all states rather than some specific state pair.\n3. By introducing a Guassian random variable in the definition 4.2, the distance can now be negative. Besides, when $s_i=s_j$, this metric does not yield zero, not even on average. Why does it make sense\n4. I am assuming scalability is the ability to remain effective when performing on more complex tasks. In what sense is SRL scalable?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Most part of this paper is easy to read and follow.\n2. The experiment results shows that the resulting algorithm is consistently better than baselines." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates the problem of representation learning in deep reinforcement learning. It points out the fundamental challenges of previous (approximations of) bi-simulation metrics. It proposes SRL, which relaxes bi-simulation metric, aiming at solving the intractable reward difference and collapse in the sparse reward setting. Furthermore, it considers continuous differences over the transition distribution to tighten the metric. Finally, experiments are conducted to show the advantage of this algorithm." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The writing about some concepts can be improved. While I understand bi-simulation is not new. Detailed definitions of relevant concepts should be provided, at least in the appendix. For example one would not be able to understand Theorem 3.1 if one does not read other papers. Besides, definition 4.2 is also not strict as it contains random variable in the definition of a deterministic map by default. This makes subsequent statements unclear as well.\n2. If I am understanding correctly, this paper contains overstatements on theorems. See question 2.\n3. Definition 4.2 seems unnatural to me, see question 3. More explanations should be provided\n\nTypos:\n1. In definition 4.2, the minus sign should be plus." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Why is the assumption A.1 not included in the main text, before the theorems that rely on them are presented?" 
}, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- The empirical results presented in Sec. 5 seem encouraging." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a \"reward-free\" bisimulation-like metric to tackle the known representation collapse issue of bisimulation metrics in sparse reward environments. The expected-reward difference term in the $\\pi$-bisimulation metric is replaced by a noise distribution and the transition distribution distance term is replaced via unrolling by a T-step discounted sum of future distances. Experiment results are shown over 3 pixel-based control suites against a few baseline algorithms." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The presentation seems too sloppy to me. The paper is littered with vague, unclear or incorrect statements that make it hard to read. \n2. The introduction section is too dense and particularly lacking in good use of scientific language.\n\n3. Some examples to the above two points:\n- L13: \"possess the superiority\"\n- L19 \"intractable reward difference\": Monte Carlo estimation makes this quite tractable.\n- L23: \"pure distribution internally\": unclear what this means.\n- L52: why \"fundamentally\"?\n- L53: \"equivalent\" task-relevant features: equivalent how?\n- L75: \"certain favorable properties\"?\n- L78: \"effective relaxation and strengthening in specific aspects\"?\n- L160: stacking frames does not lift partial observability\n- L163: why is the reward a function of $\\phi_\\omega(s_t)$ and not $s_t$?\n- L216: \"In optional $d_\\pi$\": unclear what is being said here.\n- L240: \"But seriously,...\"\n- etc.\n\n4. The motivation behind the particular alterations to the well-studied bisimulation formulation seem arbitrary and poorly motivated, sounding more like a marketing pitch than scientifically rigorous ideas. It is unclear to me how task relevance, which inherently depends on the reward function, is maintained when the reward difference term of bisimulation is replaced by some Gaussian noise arbitrarily.\n5. The method is called \"Scalable\" Representation Learning, but it isn't clear to me how the scalability of the proposed approach is any different from prior work, e.g., MiCO by Castro et al. (2022).\n6. The T-step unrolling of the transition distance seems like a hack and seems overly susceptible to high modelling errors due to compounding.\n7. The identification of the representation collapse issue of $\\pi$-bisimulation in sparse reward environments (the central focus of this paper) and Theorem 4.1 is incorrectly attributed to Liao et al. (2023), when in fact this was first studied by Kemertas et al. (2021); see their more general Lemma 2 with $c_R=1, c_T=\\gamma$. The latter work is not mentioned until the end of Sec. 4.3 in Page 7.\n8. Experiments do not compare to Kemertas et al. (2021), whose modifications of DBC (addition of embedding normalization, intrinsic rewards and inverse dynamics regularization) substantially improved performance, especially in sparse reward environments and would therefore comprise a stronger baseline. \n9. L395: unclear why \"challenging size of 2e5\" is selected as the replay buffer size. This may be unfairly disadvantaging baseline methods." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Are there cases where the Gaussian assumption might not hold, and what alternatives would work better in those situations?\n\nCould the performance of SRL be further improved by considering more complex forms of the transition distribution (e.g., non-Gaussian distributions), or is the flexibility provided by the Gaussian noise sufficient for most sparse-reward settings?\n\nHow does SRL compare with methods designed for dense-reward tasks? While the paper focuses on sparse-reward environments, could the approach offer any advantages in settings where rewards are more frequent?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The introduction of a weak bisimulation metric that relaxes strict assumptions while maintaining bisimulation's theoretical advantages is a novel and important step forward. The experiments are extensive, covering a variety of challenging tasks across multiple domains (DMControl, MetaWorld, Adroit), and the proposed SRL consistently outperforms baseline methods, including state-of-the-art approaches. The paper is generally well-structured, with a clear explanation of the problems with traditional bisimulation metrics and a well-motivated introduction of the weak bisimulation metric." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a scalable representation learning approach (SRL) designed to improve the stability of representations under sparse-reward settings in reinforcement learning (RL). The core contribution is the introduction of a weak bisimulation metric, which relaxes traditional bisimulation metrics by eliminating the reliance on reward differences and incorporating a trainable Gaussian distribution. This relaxation is intended to address the challenges of representation collapse and degeneration that traditional bisimulation metrics face in sparse-reward environments. Additionally, the approach introduces continuous differences in the transition distribution to enhance task-relevant feature extraction. The proposed SRL is empirically validated on several sparse-reward RL benchmarks, such as DeepMind Control Suite, MetaWorld, and Adroit tasks, where it outperforms state-of-the-art methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper assumes that Gaussian noise can effectively relax reward differences, but the implications of this assumption could vary across different environments. More detailed ablation studies that explore different noise distributions or parameterizations might provide further insights into the generalizability of the approach." 
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "To overcome bisimulation metrics' unstable representations in sparse reward settings, we present a weak bisimulation metric-based scalable representation learning approach for deep reinforcement learning, which outperforms SOTA baselines." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024weak,\ntitle={Weak Bisimulation Metric-based Representations for Sparse-Reward Reinforcement Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x7Q0uFTH2a},\nnote={under review}\n}" }, "abstract": { "value": "Recent studies have shown that bisimulation metrics possess the superiority of essentially extracting the features related to reinforcement learning tasks. However, limited by strict assumptions and the inherent conflict between metrics and sparse rewards, they suffer from serious representation degeneration and even collapse in sparse reward settings. To tackle the problems, we propose a reward-free weak bisimulation metric-based scalable representation learning approach (SRL). Specifically, we first introduce the weak bisimulation metric, which bypasses the intractable reward difference, instead leveraging a trainable Gaussian distribution to relax the traditional bisimulation metrics. Particularly, the Gaussian noise creates a flexible information margin for the metric optimization, which mitigates potential representation collapse caused by sparse rewards. Additionally, due to its pure distribution internally, the metric potentially mitigates representation degeneration resulting from inconsistent computations under strict assumptions. To tighten the metric, we accordingly consider continuous differences over the transition distribution to enhance the accuracy of the initial transition distribution difference, strengthening the extraction of equivalent task features. We evaluate SRL on challenging DeepMind Control Suite, MetaWorld, and Adroit tasks with sparse rewards. Empirical results demonstrate that SRL significantly outperforms state-of-the-art baselines on various tasks. The source code will be available later." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Deep reinforcement learning", "Weak bisimulation metric", "Representation learning", "Sparse reward" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/782de28076e77056f2a57acc702ad3d38e314c7f.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Weak Bisimulation Metric-based Representations for Sparse-Reward Reinforcement Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
x83w6yGIWb
Beware of Calibration Data for Pruning Large Language Models
main
Active
calibration data;post-training pruning;large language models
other topics in machine learning (i.e., none of the above)
1;5;6;8
5;3;3;5
1;3;3;4
1;3;2;3
2;2;3;3
5
4
2.75
2.25
2.5
-0.196116
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- What are the main differences between this work and the work by [1]?\n\n- The authors say that \"We can clearly observe that the self-generated synthetic data has higher Min-50%++ scores than the other calibration data. It indicates that the self-generated synthetic calibration data is indeed similar to the training data, confirming the validity of\nusing self-generated data as a proxy for the training data.\". The conclusion is not entirely clear to me, can you explain how to conclude that synthetic calibration data is similar to the training data in this figure?\n\n- While the paper aims to enhance general capabilities, the impact of using domain-specific calibration data for pruning models intended for specialized tasks remains unclear. do the authors have any intuition for that?\n\n[1] Miles Williams and Nikolaos Aletras. On the impact of calibration data in post-training quantization\nand pruning. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Proceedings of the\n62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),\npp. 10100–10118, Bangkok, Thailand, August 2024. Association for Computational Linguistics.\nURL https://aclanthology.org/2024.acl-long.544." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper effectively challenges the common assumption that post-training pruning methods are robust to the choice of calibration data. Recognizing the challenge of inaccessible training data, the paper introduces a \"self-generating then sampling\" strategy for constructing suitable calibration data. The paper provides a detailed examination of various aspects related to the self-generating calibration data strategy" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates the role of calibration data in post-training pruning for large language models (LLMs). The authors find that calibration data similar to the training data yields better performance when pruning LLMs for model compression. As many training datasets for LLMs are inaccessible, the authors propose a strategy to create synthetic calibration data, which outperforms commonly used datasets in experiments. This strategy involves generating synthetic text using the LLM and then filtering out low-quality data. This synthetic data is more similar to the training data and ultimately leads to better performance for pruned LLMs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While the paper shows a correlation between training data similarity and pruning performance, it doesn't explain why this connection exists. The paper's evaluation primarily centers on overall model performance. 
Investigating how calibration data affects the pruning of individual model components like attention heads or specific layers could be beneficial. This granular analysis would offer a more complete picture of how calibration data impacts different parts of the LLM." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Suggestion (I): decompose or improve the table to separately highlight (a) matching or exceeding the performance of calibration data drawn from the actual training set and (b) exceeding the performance of calibration datasets belonging to other distributions.\n\nSuggestion (II): avoid redundancy in repeating the literature review and possibly summarize the questions in the introduction. In the literature review, the name of the technique corresponding to each citation could be mentioned as well.\nSuggestion (III): improve the abstract to better reflect the outcomes of the paper and be easier to read.\nSuggestion (IV): mention somewhere that the paper will first proceed by answering the calibration-data-related questions and then propose a novel technique for its generation. Typically, one expects the main novel contribution to come first." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper productively expands on prior work to answer unanswered follow-up questions related to the influence of calibration data on pruning and delivers insightful findings through a set of reliable experiments.\nIt proposes a novel and intuitive approach for the synthesis of calibration data and evaluates it empirically and theoretically while experimentally justifying major hyperparameter choices. They show that the approach can improve by up to 2.6% over using an out-of-distribution calibration dataset.\nThe paper also clearly describes the background, relevant pruning approaches, the problem statement, and the proposed approach for calibration data synthesis, as well as experimental results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Large language models with numerous parameters substantially increase deployment and inference complexity and costs. To mitigate this, post-training parameter pruning can be used, which exploits the fact that neural networks are often over-parametrized.
It operates by selectively removing redundant parameters while aiming to preserve performance as measured using a sample of calibration data.\nThe key contributions of this paper are: (i) a (plausibly) novel data synthesis strategy for calibration data, and (ii) an investigation into the effects of size, quality, and distribution of calibration data, across different pruning hyperparameters.\nAdditionally, the paper examines major hyperparameter choices within their strategy and performs additional analyses to show that their synthesis method generates data that is distributed similarly to the training data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main results are not so well represented. In Table 2, the proposed calibration data synthesis approach frequently falls behind other sources of calibration data. It’s not highlighted in the table (e.g., using colors or otherwise) whether each source was present in the training set of the evaluated LLM. That is, it makes sense to have separate comparisons for the proposed approach with each of (i), data the model was not trained on and (ii), data the model was trained on, but these seem to be mixed up in one table, making it hard to interpret the quality of the results by looking at the table. The statement "Overall, our self-generated synthetic calibration data outperforms other baseline calibration data in language modeling and commonsense reasoning tasks and…" is not well justified because the remainder of the paragraph focuses on Wikipedia and C4, and it’s not obvious from the table that it outperforms all sources consistently over all tasks. \n\nThe paper involves some redundancies. For instance, the introduction as well as the background seem to closely repeat the literature review. The questions are mentioned in the introduction and then later again in section 3. Moreover, the choice of words in some of the sentences used is inadequate. For instance, the use of "value more" in "We fill this blank and surprisingly observe that the effects of calibration data even value more than designing advanced pruning strategies." Take note as well that the paper does not convey that this "values more" than designing more advanced pruning strategies, and that’s nontrivial to prove. Constructs such as "while different calibration data’s impact on pruning performance still lacks systematical exploration." also make the abstract harder to read compared to if it was something like "while the impact of calibration data used has been…"." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Where does Figure 6 reflect the results of magnitude-based pruning?\n2. Are the conclusions and method presented in this paper applicable to LLM quantization?"
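The "self-generating then sampling" strategy summarized above lends itself to a short sketch: seed the model with short prefixes, generate continuations, score candidates with the model's own perplexity, and drop the worst tail before taking the calibration sample. The prefix handling, sampling settings, and the 20% cutoff below are illustrative assumptions (the reviews mention a top-20% elimination without specifying the criterion), not the paper's exact recipe.

```python
import torch

def build_calibration_set(model, tokenizer, prefixes, n_calib=128,
                          max_new_tokens=256, drop_frac=0.2):
    # Generate one candidate continuation per prefix.
    candidates = []
    for p in prefixes:
        ids = tokenizer(p, return_tensors="pt").input_ids
        out = model.generate(ids, max_new_tokens=max_new_tokens, do_sample=True)
        candidates.append(out[0])
    # Score each candidate by its perplexity under the generating model.
    ppls = []
    for seq in candidates:
        with torch.no_grad():
            loss = model(seq[None], labels=seq[None]).loss
        ppls.append(loss.exp().item())
    # Drop the highest-perplexity tail, then take the calibration sample.
    order = sorted(range(len(candidates)), key=lambda i: ppls[i])
    kept = [candidates[i] for i in order[: int(len(order) * (1 - drop_frac))]]
    return kept[:n_calib]
```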
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. This paper introduces a criterion and construction strategy for choosing calibration data in post-training pruning, supported by extensive experimental validation.\n2. The authors conduct experiments on various LLMs and pruning methods, with multiple repetitions, to eliminate the effects of randomness.\n3. The paper is well-organized, clearly presenting the empirical studies, methodology, experiments, and results, making it easy for readers to follow the authors' arguments." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the impact of calibration data in the post-training pruning of LLMs, which shows that calibration data significantly affects pruned model performance as pruning difficulty increases, surpassing the improvements from advanced pruning algorithms. The authors also find that using training data or data similar to it as calibration data significantly boosts pruned model performance. Since pre-training data is often unavailable for advanced LLMs, the paper proposes a strategy for self-generating calibration data. Extensive experiments on multiple LLMs and pruning methods confirm the effectiveness of the proposed synthetic calibration data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. This paper only conducts experiments on unstructured and semi-structured pruning settings and does not validate the effectiveness of synthetic calibration data in more practical structured pruning.\n2. The synthetic calibration data is not a method first proposed by the authors. A recent work by Shin et al.[1] also proposed synthetic calibration data. However, the authors do not discuss the differences between that work and the others.\n3. This paper only uses data from Wikipedia to generate synthetic data. Why do you not validate the effectiveness of synthetic data generated from other sources?\n\n[1] Shin, Sungbin, et al. \"Rethinking Pruning Large Language Models: Benefits and Pitfalls of Reconstruction Error Minimization.\" arXiv preprint arXiv:2406.15524 (2024)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Could the authors provide further clarification on how efficient their proposed calibration data synthesis method is, e.g., what are the minimum data points it needs to generate for calibration?" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. **Originality:** In addition to the models included by Williams & Altetras (EMNLP2024), the authors also tested with DCLM-7B. 
This model is designed to showcase the effectiveness of systematic data curation. They propose a self-generating calibration data synthesis strategy. \n2. **Quality:** The paper provides a systematic exploration, supported by experimental results demonstrating how different calibration datasets affect the performance of pruned models.\n3. **Clarity:** The writing is reasonably clear and easy to follow. The objective is straightforward. \n4. **Significance:** The findings have significant implications for practitioners in the field, although they have been highlighted by previous work already." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates the impact of calibration data in the pruning of large language models (LLMs). This work mainly repeats work that has been done by Williams & Aletras (EMNLP2024), which investigates the impact of calibration data in the pruning and quantization of large language models. The authors present evidence that the quality and type of calibration data can impact pruning performance, at times more so than advanced pruning methods themselves, reflecting the results reported by Williams & Aletras (EMNLP2024). They propose a self-generating calibration data synthesis strategy to create effective calibration datasets when access to training data is limited." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Lack of Novel Contribution:** The study builds on important findings by Williams & Aletras (EMNLP2024), but these findings have already been established, and previous work has gone further by comparing quantization and pruning. \n2. **Lack of downstream task experiments:** The authors only consider pruning performance, which does not necessarily reflect downstream tasks. Previous work by Williams & Aletras (EMNLP2024) has done a much more comprehensive evaluation of a wide range of downstream tasks. \n3. **No explanation of pruning performance:** The paper primarily evaluates "pruning performance," but fails to provide a clear explanation of this metric. It's unclear whether this refers to pruning error, signal-to-noise ratio (SNR), or another measure. The authors neither explain their calculation method nor cite a source for this metric. \n4. **Experimentation with Diverse Datasets:** The experiments predominantly focus on a narrow range of calibration datasets and models. Including a broader set of datasets could provide more generalizable results and strengthen the conclusions drawn about the effectiveness of their proposed methods.\n5. **Validation or discussion of choices in methods:** Some choices could potentially impact the results and are not validated, such as why 5,000 samples are drawn from the Wikipedia data for generation and why the top 20% are eliminated." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024beware,\ntitle={Beware of Calibration Data for Pruning Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x83w6yGIWb},\nnote={under review}\n}" }, "abstract": { "value": "As large language models (LLMs) are widely applied across various fields, model compression has become increasingly crucial for reducing costs and improving inference efficiency.
Post-training pruning is a promising method that does not require resource-intensive iterative training and only needs a small amount of calibration data to assess the importance of parameters. Previous research has primarily focused on designing advanced pruning methods, while different calibration data's impact on pruning performance still lacks systematical exploration. This paper investigates the effect of calibration data on post-training pruning and demonstrates that using calibration data similar to the training data yields better performance. Based on this finding, we propose a self-generating synthetic calibration data strategy to sample suitable calibration data for LLMs in practical scenarios with inaccessible training data. We conduct experiments on the DCLM, LLaMA-2, and LLaMA-3 models, and the results show that the proposed method outperforms commonly used calibration data." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "calibration data", "post-training pruning", "large language models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/612762032ae529c4c51b222e9167ec9df2782d49.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Beware of Calibration Data for Pruning Large Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
x8jxf3byli
TWO STAGES DOMAIN INVARIANT REPRESENTATION LEARNERS SOLVE THE LARGE CO-VARIATE SHIFT IN UNSUPERVISED DOMAIN ADAPTATION WITH TWO DIMENSIONAL DATA DOMAINS
main
Active
domain invariant representation learning;unsupervised domain adaptation;image recognition;signal processing;classification
transfer learning, meta learning, and lifelong learning
1;1;3;5;6
3;5;4;4;3
2;1;2;3;2
1;1;1;2;2
1;1;1;2;2
3.2
3.8
2
1.4
1.4
-0.3669
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The idea of utilizing intermediate data to smooth the domain adaptation process sounds reasonable." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a two-stage domain invariant representation learning method to address large co-variate shifts in unsupervised domain adaptation. The approach uses intermediate, unlabeled data to create smoother transitions between source and target domains, aiming to enhance classification performance under challenging conditions. The authors claim their method outperforms existing UDA models, especially when co-variate shifts are significant." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Poor clarity and organization. The paper is challenging to read, with many grammatical errors, convoluted language and unclear explanations of the methodology.\n2. Lack of rigorous validation: The theoretical claims, especially the effectiveness of two-stage learning and parameter optimization, lack sufficient mathematical justification and empirical support. In line 62, the authors claim that \"intermediate data (unsupervised) between source and target to ensure simultaneous domain invariance between source and intermediate data and invariance between intermediate and final target data\". Doesn't this imply the source and target data are domain invariant as well?\n3. The text is verbose and repetitive, making it difficult to extract key insights and understand the novelty compared to prior methods." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. How would the method perform if an intermediate, unsupervised domain were unavailable or of low quality? Are there alternative approaches, such as synthetic data generation or transfer learning, that could help mitigate this dependency? Could the authors provide guidance on selecting or creating intermediate datasets for cases where a semantically related domain is not readily available?\n\n2. 
Can the authors provide specific examples or case studies to demonstrate the practical impact of the proposed parameter tuning framework on model performance? Would an experiment isolating the effects of the parameter tuning framework help clarify its practical benefits? If so, could the authors consider adding one to the paper?\n\n3. How does the proposed method address specific limitations of existing UDA techniques, such as domain-adversarial training or correlation alignment? Could the authors provide a comparison experiment or detailed analysis that highlights the advantages and trade-offs of this two-stage approach relative to traditional UDA methods?\n\n4. How does the proposed method handle highly heterogeneous domains where intermediate data is noisy or contains varying domain characteristics?\n\n5. Could the authors include ablation studies to isolate the effects of each stage in the two-stage process? This might help clarify the contributions of each component in achieving domain invariance." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper addresses a critical limitation in UDA by focusing on large co-variate shifts in two-dimensional data domains, which are common in real-world applications such as autonomous driving and human activity recognition." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a two-stage domain-invariant representation learning approach for UDA under large co-variate shifts. It uses intermediate unsupervised data to bridge the gap between source and target domains. Additionally, a theoretical framework is proposed for parameter tuning without requiring target labels." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The main limitation of the proposed method is its dependency on an intermediate, unsupervised domain that is semantically related to both the source and target domains. This dataset may not always be available or feasible to collect, especially in real-world applications with limited resources.\n2. The practical application of the proposed method is not fully demonstrated. The benefits remain largely theoretical, making it hard for readers to grasp its relevance without concrete examples of its impact on model performance.\n3. Lacks a deep discussion that contextualizes how this approach builds upon or diverges from existing UDA methods. An analysis/experiment can be done to show how the proposed method addresses specific limitations in prior approaches, such as domain-adversarial training or correlation alignment techniques." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Is the reverse-validation-like idea always effective?\n2.
This reads more like a training strategy, in that it works in a step-by-step manner compared to an end-to-end manner." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The two-stage domain adaptation method, i.e., two-stage DANN, sounds good and interesting.\n2. The reverse-validation-based idea is interesting and not common in the domain adaptation community.\n3. The two-stage strategy can scale to other methods such as CORAL." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a simple two-stage domain adaptation method by feeding the source domain, an intermediate domain, and the target domain into the model. The intermediate domain overcomes the large covariate shift problem that is a widespread challenge in domain adaptation. The paper further proposes a free-parameter indicator based on a reverse validation strategy." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Domain adaptation has undergone extensive study in the past decade. However, with multimodal large language models, the domain gap has been alleviated in another way, e.g., CLIP-based methods [1,2]. Therefore, the impact of this proposal may be weak.\n2. Comparisons to recent SOTA methods [3] are insufficient, particularly large vision-language-model-based and prompt-based approaches [4].\n3. The writing should be further improved for easier reading.\n4. Using an intermediate domain as a bridge is not new; there is a wide body of research in DA with intermediate states [6, 7].\n5. As the two-stage DANN algorithm shows, the intermediate domain is required. I think how to obtain the intermediate domain is still an open question, and this is not discussed.\n6. Transformer-based DA models should also be discussed, since they have been widely used in domain adaptation [8, 9]. \n\n[1] Learning transferable visual models from natural language supervision. In ICML, pages 8748–8763. PMLR, 2021.\n[2] Ad-clip: Adapting domains in prompt space using clip. In ICCV, pages 4355–4364, 2023.\n[3] Gradient Harmonization in Unsupervised Domain Adaptation. In IEEE TPAMI, 2024.\n[4] Domain adaptation via prompt learning. In IEEE TNNLS, 2023.\n[5] Domain-Agnostic Mutual Prompting for Unsupervised Domain Adaptation. In CVPR, 2023.\n[6] Semi-Supervised Domain Generalization with Evolving Intermediate Domain. In PR, 2023.\n[7] Manifold Criterion Guided Transfer Learning via Intermediate Domain Generation. In IEEE TNNLS, 2019.\n[8] Tvt: Transferable vision transformer for unsupervised domain adaptation. In WACV, 2023.\n[9] Safe self-refinement for transformer-based domain adaptation. In CVPR, 2022." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Use cases of the proposed methods are too limited.
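The two-stage DANN discussed in the review above can be made concrete with a small sketch: a shared feature extractor, a source-supervised classifier, and two gradient-reversal domain discriminators, one aligning source with intermediate data and one aligning intermediate with target data. Module names and the equal loss weights are placeholders, not the paper's exact architecture or weighting.

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, g):
        return -ctx.lam * g, None  # reverse gradients into the feature net

def two_stage_dann_loss(feat, clf, disc_si, disc_it, xs, ys, xi, xt, lam=1.0):
    fs, fi, ft = feat(xs), feat(xi), feat(xt)
    task = F.cross_entropy(clf(fs), ys)                 # source supervision
    # Stage 1: source vs. intermediate domain discrimination (adversarial).
    d1 = torch.cat([disc_si(GradReverse.apply(fs, lam)),
                    disc_si(GradReverse.apply(fi, lam))]).squeeze(-1)
    y1 = torch.cat([torch.zeros(len(xs)), torch.ones(len(xi))])
    # Stage 2: intermediate vs. target domain discrimination (adversarial).
    d2 = torch.cat([disc_it(GradReverse.apply(fi, lam)),
                    disc_it(GradReverse.apply(ft, lam))]).squeeze(-1)
    y2 = torch.cat([torch.zeros(len(xi)), torch.ones(len(xt))])
    return task + F.binary_cross_entropy_with_logits(d1, y1) \
                + F.binary_cross_entropy_with_logits(d2, y2)
```

The same two-stage split carries over to non-adversarial criteria, e.g., replacing each discriminator term with a CORAL distance, which is presumably what the "scalable to CORAL" strength above refers to.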
It was designed for two-dimensional data domains. What if the source and target data are in a high-dimensional space?" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. A new approach was proposed.\n2. It provides an automated free-parameter tuning method without needing access to target ground truth labels." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors of this manuscript propose a two-stage domain-invariant representation learning method, which uses semantic intermediate data to bridge the gap between source and target domains. This method improves classification performance even under large covariate shifts by learning domain-invariant features and optimizing task discriminability through source labels. The paper also introduces a theorem for optimizing free parameters by measuring the gap between trained models and target labeling rules. The proposed method outperforms previous UDA techniques across 38 tasks in 4 representative ML datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. English was not used properly:\n a) line 52: "did not be"\n b) line 498: "It can be read that" sounds awkward.\n ...\nLots of sentences in this manuscript are not idiomatic, making it hard to follow the manuscript's content. I strongly recommend the authors take the time to improve the presentation of this work.\n\n2. No related works section.\n\n3. The x- and y-axis labels for Figures 4-6 are not easily visible." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Please refer to weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper introduces a novel and effective solution to a practical challenge in machine learning by using intermediate data to bridge large domain gaps.\n2. The proposed method is versatile, as it can be integrated with various domain invariant representation learning techniques.\n3. The authors derive a theorem for measuring the gap between trained models and unsupervised target labelling rules for hyperparameter search." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses large co-variate shift problems in UDA, particularly when dealing with two-dimensional co-variate shifts.
The proposed method uses an intermediate unsupervised dataset to bridge the gap between source and target domains, learning domain-invariant features simultaneously between source-intermediate and intermediate-target pairs, which helps achieve better domain adaptation compared to direct source-target adaptation. The authors also derive a theorem for measuring the gap between trained models and unsupervised target labelling rules, which helps optimize free parameters without access to target labels. The proposed method is validated on 4 classification datasets comprising 38 UDA tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tThis paper requires substantial improvement in terms of writing quality and clarity of presentation. (1) Many notations are used without proper definition (e.g., D_S, \hat{y}^S_{i,j}, and N appear before being formally introduced); (2) The language is frequently imprecise and contains awkward expressions that hinder understanding. I strongly recommend the authors thoroughly revise the mathematical notations with clear definitions and improve sentence structures and word choices for better presentation.\n2.\tThe authors assume the existence and availability of appropriate intermediate domains that perfectly fit their "two-dimensional domain shift" framework, but do not adequately address how to identify or construct such intermediate domains in real-world applications. For example, while it is intuitive to have MNIST-M as an intermediate domain between MNIST and SVHN, in most real-world scenarios, it is unclear: (1) How to systematically identify the two dimensions of domain shift; (2) How to obtain or construct suitable intermediate domain data; (3) What to do when clean intermediate domains are unavailable or when domain shifts are more complex than two-dimensional. Without addressing these practical concerns, the proposed method may have limited applicability in real-world domain adaptation problems.\n3.\tThe technical novelty of the proposed method is limited. The approach is essentially a straightforward extension of existing domain-invariant learning methods. It merely splits the domain adaptation loss into two components (L_domain(S,T) + L_domain(T,T')) and applies standard adversarial training techniques, lacking novel methodological designs in terms of loss function formulation, optimization strategy, or network architecture.\n4.\tThe proposed parameter selection method is incremental. It is essentially a straightforward adaptation of the existing Reverse Validation (RV) method to the two-stage domain adaptation setting, without substantial methodological innovations.\n5.\tThe experimental evaluation of the paper is not up to current standards in domain adaptation research. The comparisons are limited to classical UDA methods like DANNs and Deep CORAL, while ignoring numerous recent advanced techniques that have shown significant improvements in handling large domain gaps."
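Since both this and the previous review bring up Reverse Validation (RV) for free-parameter selection, a schematic may help readers unfamiliar with it: train forward on source-to-target, pseudo-label the target, train a reverse model from the pseudo-labeled target back to the source, and score agreement with the true source labels. Here `train_uda` is a stand-in for any UDA trainer (e.g., DANN) and is not an API from the paper; arrays are assumed to be NumPy.

```python
def reverse_validate(train_uda, Xs, ys, Xt, params):
    fwd = train_uda(Xs, ys, Xt, **params)          # adapt source -> target
    yt_pseudo = fwd.predict(Xt)                    # pseudo-label the target set
    rev = train_uda(Xt, yt_pseudo, Xs, **params)   # adapt back: target -> source
    return (rev.predict(Xs) == ys).mean()          # agreement with true labels

# Model selection: keep the configuration with the highest RV score, e.g.
# best = max(grid, key=lambda p: reverse_validate(train_uda, Xs, ys, Xt, p))
```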
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "Novel method of domain invariant representation learning for large co-variate shift in unsupervised domain adaptation problem" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024two,\ntitle={{TWO} {STAGES} {DOMAIN} {INVARIANT} {REPRESENTATION} {LEARNERS} {SOLVE} {THE} {LARGE} {CO}-{VARIATE} {SHIFT} {IN} {UNSUPERVISED} {DOMAIN} {ADAPTATION} {WITH} {TWO} {DIMENSIONAL} {DATA} {DOMAINS}},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x8jxf3byli},\nnote={under review}\n}" }, "abstract": { "value": "Recent developments in the unsupervised domain adaptation (UDA) enable the unsupervised machine learning (ML) prediction for target data, thus this will accelerate real world applications with ML models such as image recognition tasks in self-driving. Researchers have reported the UDA techniques are not working well under large co-variate shift problems where e.g. supervised source data consists of handwritten digits data in monotone color and unsupervised target data colored digits data from the street view. Thus there is a need for a method to resolve co-variate shift and transfer source labelling rules under this dynamics. We perform two stages domain invariant representation learning to bridge the gap between source and target with semantic intermediate data (unsupervised). The proposed method can learn domain invariant features simultaneously between source and intermediate also intermediate and target. Finally this achieves good domain invariant representation between source and target plus task discriminability owing to source labels. This induction for the gradient descent search greatly eases learning convergence in terms of classification performance for target data even when large co-variate shift. We also derive a theorem for measuring the gap between trained models and unsupervised target labelling rules, which is necessary for the free parameters optimization. Finally we demonstrate that proposing method is superiority to previous UDA methods using 4 representative ML classification datasets including 38 UDA tasks. Our experiment will be a basis for challenging UDA problems with large co-variate shift." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "domain invariant representation learning", "unsupervised domain adaptation", "image recognition", "signal processing", "classification" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/08f5b0073a2cc72926c2ec2bb2c33e49f41ec017.pdf" }, "presentation": null, "primary_area": { "value": "transfer learning, meta learning, and lifelong learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "TWO STAGES DOMAIN INVARIANT REPRESENTATION LEARNERS SOLVE THE LARGE CO-VARIATE SHIFT IN UNSUPERVISED DOMAIN ADAPTATION WITH TWO DIMENSIONAL DATA DOMAINS" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
x8mr9zGkpr
Attributing Model Behavior: The Predominant Influence of Dataset Complexity Over Hyperparameters in Classification
main
Active
Model Behavior Attribution;Complexity Meta-features;Hyperparameters;Bias-Variance Decomposition
other topics in machine learning (i.e., none of the above)
1;3;3;5
5;5;4;3
1;2;1;3
1;2;2;2
2;4;2;4
3
4.25
1.75
1.75
3
-0.852803
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "I dont have any follow-up questions." }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1) the paper is easy to read\n2) The paper confirms a fact that is well know by most data science / ML prectioners, the complexity of the data set matters for classification performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes an analysis of the influence of hyperparameter tuning and training \"data complexity\" (the author call it \"complexity meta-features\") on the performance of two classic classification algorithms:SVMs and Random forests. the paper includes run extensive experiments on 290 OpenML tabular datasets. The author's end with a summary of their findings: dataset complexity matters the most." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) The content, experiments and conclusions of the papers are very outdated. It reads like a paper that was written 15-20 years ago. Most citations are from many years ago. Hence the contribution are practically irrelevant to the current state of ML / data science in 2024. Hence there is no significant contribution or relevance to the ICLR community.\n\n2) Furthermore, the main conclusion of the paper is that dataset complexity (class overlap, dimensionality, etc) matters when training a classifier (RF or SVM). These are well known fact that are thought in introductory ML class and hence there is no new information provided here." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "please check the weakness." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper addresses a gap by directly comparing the impacts of dataset complexity and hyperparameters on model performance. Previous studies have often examined these factors separately, but this research provides a unified framework to assess their relative influence on bias and variance. 
\n\nThe paper comprehensively considered nearly 300 datasets and over 300 configurations, enabling a more convincing conclusion.\n\nApart from the numerical results, the paper also includes very detailed arguments about why this happens and what it indicates.\n\nSome of the estimated coefficients shown in the OLS summary table do align with our common understanding of how random forests deal with bias-variance tradeoffs, e.g., the coefficients associated with min samples leaf, bootstrap, and max features." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper explores the relative influence of dataset complexity and hyperparameters on classification model behavior, specifically for RF and kernel SVM.\nThe authors utilize the fANOVA framework and OLS to quantify the influence on bias and variance brought by dataset complexity and hyperparameters.\nBased on the analysis across 290 datasets and 304 hyperparameter configurations, the study finds that dataset complexity meta-features—such as class overlap, data sparsity, and class imbalance—have a more substantial impact on bias and variance than hyperparameters." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Lack of details on the experimental design.\n\n(1) As the paper claims, nearly 300 datasets of varying sample sizes, response categories, and feature dimensions are used. Why are they comparable? I believe 0-1 loss is not a typical loss function people use for multi-class classification problems. And high-dimensional datasets wouldn't necessarily react the same as n>p datasets in terms of hyperparameters.\n\n(2) Hyperparameters like C and gamma have huge variations in scale. How are they included in the OLS? Are they log-transformed?\n\n(3) For readers not familiar with meta-features of datasets, it would be very helpful to at least sketch some general ideas of how these meta-features are defined. Are those features immune to data transformation? The same goes for how fANOVA works.\n\n2. Lack of details on why the experiments are conducted in such a way.\n\n(1) In my view, the number of trees is one of RF's most important hyperparameters. Why is this not considered?\n\n(2) I believe the neural net is the framework that people are most curious about. The authors also mention it in the introduction. Why is that not considered?\n\n(3) Based on the pymfe package, there are plenty of meta-features that characterize data complexity from different perspectives. Why are specifically these three, N1, T2, and C1, chosen?\n\n3. Some of the results are confusing to me.\n\n(1) If those meta-features are immune to data transformation, how can we benefit from your research even though we know that data complexity itself is much more important than tuning hyperparameters? If not, shouldn't you include some examples of how bias and variance are reduced after some preprocessing of the data that reduces data complexity? For example, class imbalance issues can be alleviated by reweighting samples or bootstrapping.\n\nIn general, I do agree that the data's quality is much more important than tuning parameters. If the data is always linearly separable, I believe logistic regression would suffice. It's just that data quality is often not something we can work on, whereas the model and its parameter choices are. Please correct me if I am wrong.\n\n(2) Based on your OLS example, the features included are all significantly different from 0.
If the trend is determined, does it mean that choosing a smaller C or a certain kernel can always help with the prediction?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "No questions, but recommendations for a future submission:\n\n- drop the analysis of dataset complexity vs. hyperparameters; just focus on the impact of hyperparameters on the bias/variance trade-off. It is a sufficiently important topic on its own. The impact of dataset complexity is not a useful subject because 1) it is well understood that dataset complexity has a major impact on classifier performance and 2) it is not clear what to do about it.\n\n- include multi-class problems\n\n- include XGBoost and neural networks in the analyses. I don't think one can make practically useful conclusions without including those state-of-the-art classifiers. I would personally also add logistic regression\n\n- consider adding image classification datasets in your analyses. If the behavior - in terms of bias/variance, and hyperparameter tuning contribution - is quite different from tabular data, that would be a valuable result\n\nI think this *could* become a good paper, but not without extensive revision, which is not feasible in the ICLR timeframe.\n\n\nMinor points\n\n3.3\n---\n\n- 2 out of 3 is not really "generally". Please just be specific: for N1 and T1, higher values indicate greater classification difficulty. For C1, lower values indicate greater classification difficulty. That would be simpler and easier to read.\n\n"Higher values of these meta-features generally indicate greater classification difficulty (except for C1)."\n\n4.2.2\n-----\n\n- "When considering variance, a similar trend emerges. According to the fANOVA results (Figure 3b), C1 continues to dominate, accounting for 37.78% of the variability in variance."\n\nI wouldn't call this a similar trend. For bias, C1 accounts for 71%; for variance, 38%. Please point out that the C1 impact on variance is significantly lower than on bias. This suggests other factors play a greater role. Discuss what those factors might be." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- extensive experiments\n\n- clearly written, easy to read\n\n- the topic of the bias/variance tradeoff is very important" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors compare the impact of dataset complexity and hyperparameter tuning on the performance of binary Random Forest and SVM classifiers. They perform extensive experiments which support their finding that dataset complexity has the dominant impact on performance."
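The OLS and meta-feature questions above (log-scaling C and gamma, how N1/T2/C1 enter the regression) can be grounded with a minimal pipeline sketch. The pymfe and statsmodels calls are standard, but the data-frame columns and the choice of log10 scaling are assumptions about how such a study would be set up, not the paper's code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from pymfe.mfe import MFE

def complexity_features(X, y):
    # Extract the three complexity meta-features discussed in the reviews.
    mfe = MFE(groups=["complexity"], features=["n1", "t2", "c1"])
    mfe.fit(X, y)
    names, values = mfe.extract()
    return dict(zip(names, values))

# df: one row per (dataset, hyperparameter configuration) with columns
# n1, t2, c1, C, gamma, and a measured bias (or variance) value.
def fit_attribution_ols(df):
    X = df[["n1", "t2", "c1"]].copy()
    X["log_C"] = np.log10(df["C"])        # log-scale wide-ranging parameters
    X["log_gamma"] = np.log10(df["gamma"])
    return sm.OLS(df["bias"], sm.add_constant(X)).fit()
```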
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I think the paper sends, in essence, a wrong message to readers.\nAuthors are basically suggesting that hyperparameter optimization\nisn't useful, which I disagree with. As I write this, I am tuning\nhyperparameters of a neural network classifier, and the AUROC has gone\nfrom 0.55 to 0.75, exclusively due to the (gradually improving) choice\nof hyperparameters. The conclusion is at best narrowly limited to SVM\nand RF, but that makes the manuscript 1) not that useful, given\nlimited scope 2) misleading, since many readers may walk away with a\nwrong impression that the conclusions apply generally \n\nTo be clear, not disagreeing with the idea that hyperparameter tuning\nhas a natural limit, and going beyond that may require additional or\ndifferent data. But the paper leaves an impression to the reader that\nhyperparameter tuning doesn't help in general, which I disagree with. At a minimum, the title should say that the results are limited to SVM and RF binary classifiers. \n\nAlso keep in mind that \"optimizing dataset complexity\" is a vague and\nhardly actionable advice. I personally don't quite know how to\noptimize dataset complexity, whereas hyperparameter tuning is well\nunderstood. This should be clearly stated/discussed." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. fANOVA is one of many ways of decomposing model predictions. Why this approach? And are there any potential issues due to taking into account pairwise interactions only?\n2. l. 60: How can normalization or scaling affect the intrinsic complexity of the datasets? Intuitively, wouldn't a reasonable measure of complexity be invariant to these? Of the three in the paper, C1 is invariant. T2 is not, because it depends on PCA, but that just makes me wonder if T2 even makes sense. I wouldn't want my dataset to become more complex, just because I convert a feature from meters to centimeters. N1 is based on a two-sample test, so, while I didn't go into the details of the test, I'd assume that we'd like all of our two sample tests also to be invariant to scaling. ... To clarify, I'm not criticizing the choice of not scaling, we can argue either way. But I am concerned about the use of complexity metrics that are not invariant.\n4.But not normalizing does have an effect on SVM and regularization? This is not how we would apply SVM in practice.\n5. N1 is a bit outdated. Two sample tests have progressed a lot in recent years. In particular, tests based on machine learning models directly or using classification performance as a proxy.\n6. It seems that N1 would fail to be attributed when the dataset is so complicated that RF and SVM can perform well, but the test in N1 doesn't?\n7. The attribution to C1 for SVM is to me the most surprising result in the paper. Any explanations of this difference between SVM and RF? 
How correlated are N1 and C1?\n8. l. 658: How reasonable is this assumption that there are no hidden confounding factors?\n9. Parameter configurations were generated by sampling uniformly and independently from each hyperparameter range? I'm asking because of the following scenario. Let's say that the optimal range for a parameter is relatively narrow (0 - 0.05), while after that (0.05 - 1.0) the model performs poorly, at the majority-class baseline. Because the range of good values is small, this, as a variable in a regression, would not be that important. So, it would not get much of an attribution in the experiment, but it is definitely important in practice. In other words, isn't the importance of a hyperparameter determined by the difference it can make, not by the variability of the performance over some arbitrary set of its values?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Well structured and clearly written paper.\n- In many ways a very comprehensive experiment.\n- Tackles an important issue for ML practitioners." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper focuses on the attribution of predictive model behavior. In particular, it describes a comprehensive empirical comparison of the influence of hyperparameters and dataset meta-features on the bias and variance of classifiers. The analysis utilizes functional ANOVA. The main result is that most of the bias and variance can be attributed to dataset characteristics, as opposed to hyperparameters." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I have two main concerns. The first is that the experiments, while in some ways very comprehensive, are in other ways very limited:\n\n- Only two classification algorithms.\n- No missing values in the dataset is a very strong criterion.\n- A limited number of complexity measures.\n\nThe second is that when carefully interpreted, the results are not that general or actionable:\n- N1 is in essence a classifier (probably along the lines of LDA). So the results can basically be summarized as: most of the bias and variability of RF and SVM can be explained by running another reasonable classifier and seeing how it performs. That of course makes perfect sense, but it can also be derived from what we already know, that classifiers tend to perform similarly (the differences between classifiers are less than the differences between datasets).\n- We should be more careful when interpreting the result that model performance can be attributed more to dataset characteristics than to hyperparameters. First, it is the nature of commonly used classifiers that they are relatively robust in terms of hyperparameter selection - being easy to tune is what makes them popular. Second, the range of several parameters is limited. For example, would results change if max_features was allowed to go below 0.1 or above 0.9? Or if 20 different kernels were considered? Similarly, the experiments are limited to 1500 features, which diminishes the importance of regularization.\n- In practice, I can in most cases freely tune the parameters and select models. I can't really change my problem (or dataset) though.\n- The paper does not consider model selection, which I would in this context consider as part of hyperparameter tuning.
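Since several of the questions above turn on what N1 and C1 actually measure, a from-scratch sketch may help: following the standard definitions in the complexity-measure literature (Ho & Basu; Lorena et al.) as commonly stated, N1 counts "borderline" points via a minimum spanning tree, and C1 is the normalized entropy of the class proportions. These are generic formulas, not necessarily the paper's implementation (which likely uses pymfe).

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def n1(X, y):
    # Fraction of points joined by an MST edge to a point of another class.
    mst = minimum_spanning_tree(squareform(pdist(X))).toarray()
    i, j = np.nonzero(mst)
    cross = y[i] != y[j]
    return len(set(i[cross]) | set(j[cross])) / len(y)

def c1(y):
    # Normalized class-proportion entropy: 1 = balanced, lower = imbalanced.
    # (Assumes at least two classes; a single class would divide by zero.)
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum() / np.log(len(p)))
```

On this reading, N1 is indeed close to "run a simple neighborhood classifier and measure its error," which is consistent with the reviewer's point that it behaves like a classifier in its own right.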
I would not be surprised that a lot more can be attributed to model selection than to tuning the parameters in this paper. Choosing a different model is also actionable.\n\nThere are also other methodological concerns (see Questions)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024attributing,\ntitle={Attributing Model Behavior: The Predominant Influence of Dataset Complexity Over Hyperparameters in Classification},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x8mr9zGkpr},\nnote={under review}\n}" }, "abstract": { "value": "Understanding how different factors contribute to model performance is critical for advancing the attribution of model behavior. Previous research has independently explored the effects of hyperparameters and dataset complexity meta-features on classification performance. However, a key knowledge gap remains in understanding how these factors comparatively influence model behavior, particularly bias and variance. This study bridges this gap by investigating which factors, hyperparameters or complexity meta-features, exert a greater influence on the bias-variance in classification algorithms, specifically Random Forests (RF) and Support Vector Machines (SVM). Using 290 diverse OpenML tabular datasets and 304 hyperparameter configurations, we employ functional analysis of variance (fANOVA) to quantify the impact of these factors. Our findings reveal that dataset complexity meta-features exert a more significant influence on both bias and variance than hyperparameters for both RF and SVM models. To further substantiate our findings, we conducted an analysis based on the Manipulation Theory of Causation. This analysis demonstrates that optimizing dataset complexity can simultaneously reduce bias and variance, while hyperparameter tuning often leads to a bias-variance trade-off with limited impact on overall performance. To the best of our knowledge, this research is the first to directly compare the effects of hyperparameters and complexity meta-features on bias and variance in classification, contributing new insights into model behavior attribution." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Model Behavior Attribution", "Complexity Meta-features", "Hyperparameters", "Bias-Variance Decomposition" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/9117c4a5a601eabbe50787ac800ccba1852ce91d.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/26c499da5b57639f990615d2e806e9fef921c474.zip" }, "title": { "value": "Attributing Model Behavior: The Predominant Influence of Dataset Complexity Over Hyperparameters in Classification" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
x8z8hCjtcY
Elephant in the Room: Unveiling the Pitfalls of Human Proxies in Alignment
main
Active
Pitfalls; Human Proxies; Alignment
alignment, fairness, safety, privacy, and societal considerations
3;3;3;5
4;4;4;4
3;4;2;3
2;2;1;3
2;4;2;2
3.5
4
3
2
2.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "line 239: DPO is not really an SFT method. it is a simplfied form of RLHF.\n\nline 365: should it be such as 1-0 metric, or something that takes into account the magnitude of the difference between the rewards.\n\nline 377: for DPO, are you computing the impliicit reward that the DPO derivation uses (Ratio of log probs) ?\n\nline 455 - 463: This finding if it can be replicated in other settings would be quite interesting (that reward accuracy during training is not reliable)." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "It is true that practitioners systematically under-account for the effect of annotation noise in preference-based alignment / RLHF. annotated data is frequently taken as ground truth, and there is less data cleaning performed than there should be.\n\nThe revised dataset could be an additional resource for the community." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper claims to be a an analysis of the role of human preference proxies (direct preferences and reward models) in LLM alignment. The authors do qualitative analyses of the errors in a commonly used alignment dataset and re-label it. They then show that this leads to a significant gain in alignment scores for both DPO style algorithms and PPO-style reward model + policy learning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "After having gone through the paper, it seems the main contribution really boils down to a re-labeling of the (widely used) HH-RLHF dataset. While this will make it a useful resource, it is unclear to me what the scientific value of the work is. It is entirely expected that re-labeling the dataset to reduce noise, should improve accuracy on the downstream tasks.\n\nWhat further insights are gleaned by this exercise? There is an inventory of the data labeling losses which are mildly interesting. However, the authors give absolutely no information about the labeling process. So what can a reader takeaway that is useful? let's grant that the downstream results indicate that the labeling process of this paper is better than the labeling for the original dataset, how do we know why?\nWas the entire dataset re-labeled by the authors? this is not a process that can scale to other problems, were lower quality raters still need to be used. The inventory of \"pitfalls\" may be helpful but it is unclear whether the categories and proportions will generalize to other datasets/ domains.\n\nThe discussion section 5 does not seem to add any new insights not familar in the literature. 
5.1 is more of a (very minimal) related work section, and the Goodhart's law discussion in 5.2 is well known and not quite relevant to the work done here.\n\nThis paper feels like a better fit for a venue like Findings of EMNLP." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- Where is the re-labeled preference dataset?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- The paper's classification of human proxies into two levels (direct preference data and reward models) offers a clear framework. This seems simple but is actually very **novel**. This is the first time I have ever seen such a classification.\n- By re-labelling the HH-RLHF dataset and providing a cleaner version (CHH-RLHF), the authors make a very practical contribution that could improve the reliability of the preference learning dataset.\n- By discussing Goodhart's Law, the paper raises awareness of the risks of over-optimizing proxies. This is indeed important for us to keep in mind: alignment isn’t just about maximising scores but ensuring genuine alignment with human intent." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper explores the challenges of using human-made proxies, like labelled data and reward models, to align large language models (LLMs) with human preferences. The authors find that these proxies often don’t accurately reflect genuine human values, which can lead to unintended model behaviours. Therefore, they re-labelled a popular preference learning dataset (HH-RLHF) to create a cleaner version, called CHH-RLHF, which they hope will support more reliable alignment. Their experiments show that better proxies lead to improved alignment, but they caution against relying too much on any single measure, as it can be over-optimized and lose accuracy. The authors urge researchers to prioritise verifying proxy reliability to prevent these hidden risks from undermining model alignment." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- “This work aims to alert researchers to the importance of verifying human proxies before using them, whether for alignment optimization or evaluation.” Several research papers discuss this issue, with reward over-optimization being one of the best-known examples; this topic has been widely discussed.\n\n- Relabeling HH-RLHF is valuable, but the quality of the HH-RLHF dataset is somewhat criticized in the community. The more important question is how we can collect user preferences in a scalable and reliable manner rather than simply re-labelling them. 
This paper is valuable for providing a re-labelled dataset, but in my opinion, it does not meet the standard of ICLR.\n\n- “However, they only establish a leaderboard for reward models, failing to reveal the relationship between alignment performance and reward model quality.” I agree, and it’s already been discussed in the community that achieving a higher score in RewardBench does not necessarily translate into better downstream task performance. How do the authors of this paper address this issue?\n\n- Table 3 is not convincing, as Figure 4 already shows that “Starling-34B” provides a better score. Why not use 34B? Instead, a 6.9B model was chosen. I wouldn’t consider this a pitfall; experiments clearly indicate that a larger base model performs better.\n\n- In Figure 5, all models are using 6.9B or 7B reward models: why not use the 34B version?\n\n- As the authors have already empirically shown that a better reward model leads to a better policy model, where is the pitfall?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- grammar L084: If not, what impacts they may have on alignment performance?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The high opposite re-label rates for HH-RLHF are insightful and highlight various issues with depending heavily on such datasets. In this setting, three of four annotators choose the rejected response as better, highlighting critical issues with the dataset.\n\nThe experimental design, discussion and framing are insightful. Experiments such as the correlation between human and model evaluations are particularly valuable." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work explores the various complexities of aligning to human preferences, considering both the more direct capture of human feedback data as a preference proxy, and the one-step-removed reward-model representation of human preference. It provides various experimental insights into how faithfully these reflect actual human preference. The authors analyze and provide a cleaned version of HH-RLHF, and use this as a basis for further analysis." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The DPO experimental settings are clear, in terms of training on both the original HH-RLHF and improved CHH-RLHF datasets. However, this is unclear for the PPO setting, and even on re-reading, I am unsure which results (if they exist) show the comparison between the effects of RMs trained on HH-RLHF vs CHH-RLHF on downstream LLM performance evals. If these experiments have not been run, I would strongly recommend running them as they would speak to the impact of the massive effort to re-label/clean up this data. 
If this is not possible due to resource constraints, I'd recommend discussing this in detail.\n\nOverall, this is thorough work which is well motivated and has involved important and substantial effort, both in terms of the CHH-RLHF annotation and experimental design, much of which is extremely insightful. I feel that currently, the poor framing of the work within the context of existing critical investigations of the space and lack of clarity around the overall narrative limit the work's potential impact. Both these points can be addressed to make this a potentially high-impact contribution." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* Some details on HH-RLHF would be helpful. Dataset size? How was it originally annotated? \nI.e., in Figure 3, when 0.07% is said to be “Empty” – how many samples is that? \n\n* Figure 4 – why did you include GPT-J? Also, how big of a model is GPT-J?\n* Figure 4 - why is there no correlation score?\n* Does the newly labeled data include annotations from all 4 annotators? If you have already gone through the effort to re-label the data, having this more granular information could allow for better alignment methods as opposed to binary (preferred vs. rejected) labels.\n* What is the purpose of Figure 5?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The re-labeled dataset CHH-RLHF is definitely a contribution for the community. \n* The authors do an exhaustive list of experiments, although it is not clear how the set of models chosen for each experiment was decided on." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper raises an important question on whether current-day alignment datasets are truly representative of human preferences. To address this question, the authors study two abstractions of human preferences - \"Level-1 Proxy\" (aka training data) and \"Level-2 Proxy\" (reward models).\n\nStarting with Level-1 Proxy: The authors use HH-RLHF, a popular dataset, and offer a taxonomy to categorize the training data (Section 3.1), such as toxic responses, \"inverted\" responses (i.e., the rejected response is actually better than the chosen response), etc.\nThe authors then carefully re-label HH-RLHF using 4 human annotators, and find that a large chunk of the data falls under the above pitfalls.\nRe-training on the cleaned data (CHH-RLHF) using DPO, they demonstrate immediate improvements in the model's \"reward score\" (Section 3.2.2, Table 2).\n\nFor Level-2 Proxy: The authors then study the impact that a reward model can have on the alignment process. Similar to Level-1, the authors start with a taxonomy of pitfalls (Section 4.1.2). However, if I understand correctly, the taxonomy does not seem to be used anywhere? 
(E.g., “Score for Empty Responses” indicates when a reward model gives high scores to an empty response – how often does something like this happen?)\n\nThe authors then claim that current reward models are not adequate to be used for training aligned models (Section 4.2).\nThe authors define an accuracy metric to assess the quality of a reward model, and evaluate a wide range of reward models (Table 4), demonstrating a wide range of accuracy scores.\nThe authors study the impact that a reward model can have on alignment algorithms - PPO and PRO – and find that suboptimal reward models lead to suboptimally aligned models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* My biggest question is what new information have we learned from this? I believe that the takeaways are that better data leads to better models, and better reward models lead to better aligned models. While a cleaned dataset is a contribution worth applauding, I don’t think there is anything else that the paper offers that we didn’t already know. \n\n* The authors have a wide range of empirical results - but it is not clear how the choice of models used for each experiment was made.\n\n* Figure 4: The authors claim that Starling-34B’s evaluations show good correlation with human evaluations (but it is missing a correlation score?), and also show that Starling-34B has the highest reward model accuracy (Table 4) and that using Starling-34B as a reward model leads to a better model (Figure 6). Doesn’t this go entirely against the authors’ claim that current-day reward models (level-2 proxy) are inadequate? \n\n* Table 4: I don’t think assessing DPO for reward modeling is reasonable. There seems to be an assumption being made that the policy model from DPO would also serve as a good reward model – and the authors justify this assumption with the paper title of DPO (“Your language model is secretly a reward model”) – I don’t think this is reasonable, especially because prior work shows that policy models learn very different things from reward models (https://arxiv.org/abs/2405.19534).\nThe reason I mention this is that if you take out the DPO results in Table 4, the claims that the authors make (i.e., reward models often do worse than 50/50 chance) are significantly weakened. My takeaways from Table 4 are that larger models lead to better reward modeling accuracy, and larger models (Starling-7B, 34B) do not seem to have the issues that the authors discuss." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024elephant,\ntitle={Elephant in the Room: Unveiling the Pitfalls of Human Proxies in Alignment},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x8z8hCjtcY},\nnote={under review}\n}" }, "abstract": { "value": "The demand for regulating the behavior of large language models (LLMs) has ignited research on alignment algorithms, the essence of which is to align LLMs' generations with human preferences. Due to the infeasibility of humans directly participating in the training or generation of LLMs, existing alignment algorithms choose to align with human preferences carried by proxies, i.e., preference data or reward models. However, whether these human proxies faithfully represent human preferences remains under-explored. 
We categorize human proxies into two levels based on the degree to which they directly embody human preferences: Level-1 Proxy (preference data) and Level-2 Proxy (reward models). We empirically examine the faithfulness of both levels of proxies and their impact on alignment performance.\nWe notice that current algorithms tend to overlook the faithfulness of these proxies in reflecting human preferences; many works even directly use reward models as their automatic evaluators without any correlation verification. The current literature on alignment overly focuses on optimizing algorithms, rendering the faithfulness of human proxies an \"elephant in the room\"—something extremely important yet largely overlooked. According to experimental results, we unveil potential risks of using inferior ``human proxies'', aiming to draw attention to this huge ``elephant'' in alignment research. We summarize existing pitfalls from different angles and provide a re-labeled preference dataset and insights about reward model usage to facilitate the healthy development of alignment\\footnote{This work contains examples that potentially implicate stereotypes, associations, and other harms that could be offensive to individuals in certain social groups.}." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Pitfalls; Human Proxies; Alignment" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/34fbc7e0927cb185993e65c39992d0efcbae5805.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Elephant in the Room: Unveiling the Pitfalls of Human Proxies in Alignment" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
x9J66fnMs8
RGRL: Quantum State Control via Representation-Guided Reinforcement Learning
main
Active
Quantum control;quantum state representation learning;reinforcement learning
applications to physical sciences (physics, chemistry, biology, etc.)
3;3;3;5;6
3;4;4;3;4
3;2;2;2;3
2;1;1;2;2
3;2;3;2;2
4
3.6
2.4
1.6
2.4
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "None." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper tackles a very relevant problem.\n\nI believe the proposed approach has great merit. Training a dedicated representation network sounds like a good idea (as it is also well motivated in the paper) and might have a big impact on various quantum applications.\n\nThe paper explains the approach well and features several amazing visualizations." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper tackles the well-known issue of quantum state control, i.e., steering a quantum system towards a target quantum state. To this end, the authors run reinforcement learning with the addition of a learned state representation encoded in a representation network. They show that their approach can steer quantum systems in a meaningful way." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main problem is that the presented approach is not sufficiently analyzed. Several design decisions in the algorithm and in the study are not really discussed and not analyzed. There is no ablation study or a comparison to other candidate approaches. There is no application of the approach to other similar issues.\n\nThe training behavior and training properties of the presented approach are neither shown nor discussed. Thus, this paper's contribution to an AI community is unclear.\n\nThe setting that reinforcement learning operates in is not formalized in a standard way and thus hard to follow. Giving a standard MDP-style definition would help here.\n\nSeveral typos persist. Most importantly, all references are formatted incorrectly." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "How well does the proposed approach generalize to recalibrations? Is overfitting the training device a potential issue to be considered? \n\nCould the proposed method be used as an extension to existing QRL frameworks (e.g., [1-4])?\n\n[1] van der Linde et al., 'qgym: A Gym for Training and Benchmarking RL-Based Quantum Compilation', 2023. 
\n[2] Altmann et al., 'Challenges for Reinforcement Learning in Quantum Circuit Design', 2023.\n[3] Kölle et al., 'A Reinforcement Learning Environment for Directed Quantum Circuit Synthesis', 2024.\n[4] Rietsch et al., 'Unitary Synthesis of Clifford+T Circuits with Reinforcement Learning', 2024." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is theoretically well-elaborated and provides a sound approach for improving state extraction in RL for quantum control. It is generally well-written and provides solid connections to the background in quantum physics. Overall, the paper presents a well-motivated method to improve exploration for reinforcement learning in quantum control by providing a smooth reward signal." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper provides an approach for the extraction and representation of states from a quantum system, improving the reward calculation for training a reinforcement learning policy in quantum control tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Despite being motivated for NISQ Devices, the paper does not provide evaluations regarding those. While the paper provides solid connections to the background in quantum physics, I am missing such connections to the background in reinforcement learning, especially focusing on integrating the extracted state and reward signal into existing frameworks for RL in quantum control and quantum circuit design. Generally, I feel like the claimed contribution could be clarified, as the paper does not seem to provide an RL algorithm but rather a method for improved state representation, which is still an important contribution when connected to existing frameworks for RL in this domain. Regarding the empirical results, I am missing details on the training process, comparisons to considerable baselines, and, in general, quantitative results. Also, the results shown in Fig. 3 could be better described in the text. Finally, the potential limitations of the approach should be discussed. \n\nMinor comments: \n\n- The stochastic policy should be defined as $\\pi: D \\times A\\mapsto[0,1]$\n- r seems overloaded, consider changing representation or reward to avoid confusion \n- A common notation t for a timestep should be used \n- Citation formatting could be improved (e.g., using \\citet for in-text citations).\n- The placement of figures (e.g., Fig.3) could be improved to ease readability" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- L 346 claims: “Thus, the optimal path in phase space does not correspond to the shortest trajectory in the representation space, implying that the control task we consider is highly nontrivial.” However, t-sne simply arranges the learned representations in low-dimensional space, and unless the pretraining of the representation network included some form of latent-space structuring, the projections will always be of chaotic nature. As such I find it difficult to reason about the complexity of this task by the t-sne control trajectory, could you please elaborate a bit more on this?\n- Is there a reason as to why Figure 3 only shows 30 steps? Does the algorithm perform better / finds the target state eventually, or are the 30 steps a physical hard-constraint? (Figures 7 are plotted up to 55 control steps?)\n- In Figure 3a, the no-label case for Tr→SB is very much an outlier, performance wise. Is there a reason for this?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Well written, clear and concise. Not to much jargon, nicely understandable even from an outside perspective. I also found the figures conceptually nicely understandable with the differently colored phase regions. The application seems to be works well, empirically." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes improved quantum state control via taking small initial samples of the system to build an improved prior RL model and thus learn actions more appropriately. Contribution is a RL algorithm to steer an uncharacterized quantum state towards a target state using only measurement statistics. The algorithm uses a representation network to estimate state representations and their similarity to the target state. With the trained representations, the model can also be used to incrementally search for inputs that produce a certain target output of given unknown quantum process. The algorithm is tested on control of many-body ground states across phase transitions, like trivial or topological symmetry broken phases." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Weaknesses:\n\nAs major weaknesses the following issues stood out to me:\n\n- Duplicate explanation of the environment, reward and the representation input in 3.1 and 3.3. Those two sections should be consolidated. Also the reward in 3.1 is introduced first by the simple negative euclidian distance, but the EQ uses a different norm ||.||(_2?) and divides by d. This should be more consistent.\n- Its not clear if the architectures in Figure 2 are part of the contribution? As I understand, Figure 2 is simply an illustration of representation network and decoders of Wu et al. 2023b?\n- Figures 3 d/e, 6 and 7 are way too small to read, even with full digital zoom. Most of the explanation focus on Figure 3, with Figures 4-6 only barely mentioned in L350-354. Even with the caption, I am not sure I understand what is meant to be conveyed with Figure 7.\n- Novelty in this work is rather lacking, in my opinion. 
This is a simple application of RL to a discrete control problem with a very simple reward function. The representation network is an interesting inclusion, but similarly also only an application of existing work. The design choices (decoder architecture, RL algorithm, hyperparameters, etc.) are not motivated, although part of this information can be found in the appendix. (Still, papers should be self-contained without the appendix.)\n- Finally, there is no comparison to other approaches included in this paper, and as such its significance is hard to judge. Without context, or at least an ablation study of different choices for the chosen approach, the contribution of this paper is very light.\n\n---\n\nMinor Notes:\n\n- Figure 1 does not really present any valuable insights that one line of text would not also have conveyed.\n- L154, 158 i.e.[,]\n- L 157 non[]Gaussian\n- L 182 unclosed bracket and whitespace ([]from a finite set …\n- The notation of bold-r for representations and default r for reward could be chosen better and is confusing at times.\n- The notion of maximizing the average cumulative reward is worded badly. In any RL setting, the RL agent learns to optimize a policy that maximizes the reward / return of each episode. Improvement of average return over time is simply a side-effect of this.\n- Inconsistent use of Fig. and Figure. for references.\n- L 327, 329 [Ref.] {citation} ?\n- The color palette in Fig.3 d/e could be better chosen and the figures should be bigger. The light blue/beige colors and the trajectory, at that small scale, are very hard to see.\n- L 371 overloads the variable “r” for the third time." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Can you demonstrate an advantage over previous methods or more direct approaches such as end-to-end learning without the intermediate step of the pre-trained representation networks?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The method and its application are presented clearly; the work is of high technical quality.\n- The problem of state preparation is relevant, the examples are well-chosen, and the new method achieves its task of state preparation.
\n- The presented algorithm seems plausible, could be promising, and seems like a natural application of the [Zhu et al. (2022) and Wu et al. (2023b)] articles." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In the control of quantum systems, a common scenario is the task of preparing a particular quantum state. In simulations, one would be able to adjust controls based on the current (simulated) quantum state, including the initial state, but that is not possible in a real experiment, where one only has access to the measurement statistics. This article proposes a new reinforcement learning method termed RGRL. It is built on two previous works [Zhu et al. (2022) and Wu et al. (2023b)] which introduced networks that learn to translate measurements into a representation of a quantum state. In every control step, one would translate measurement statistics into such a representation and decide on the next step based on that information. The representation networks are trained before controls are trained.\n\nThe authors demonstrate the application of the new method for two examples: preparing target states in the XXZ model as a many-body example, and in a continuous-variable system as a single-system example." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While the algorithm does not require pre-existing knowledge of the quantum system’s dynamics, it does require knowledge of the possible system states to train the representation networks it is based on. Direct end-to-end learning based on the measurement statistics would be truly independent of prior knowledge, and the comparative advantage of the RGRL method presented here is unclear to me. To demonstrate any such advantage, the RGRL method would have to be benchmarked against prior work, for example the articles cited in the introduction. Such benchmarking would significantly strengthen the article, going beyond the more exploratory demonstration of RGRL." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "In the introduction you mention that \"a few works explored the possibility of directly using measurement outcomes for reward calculation Reuer et al. (2023); Borah et al. (2021); Sivak et al. (2022), but scalability still remains a challenge.\" Is your algorithm more scalable than the alternatives, and why? You claim that \"This approach significantly reduces the reliance on precise initial state information, making the algorithm scaleable and applicable to larger quantum systems\". How does the algorithm scale with quantum system size, and what makes it scalable?\n\nHow many measurements are needed, and how does this scale with representation quality? How does this compare with other black-box approaches in terms of the number of required samples? Perhaps this is also an interesting reference to consider: Yuval Baum, Mirko Amico, Sean Howell, Michael Hush, Maggie Liuzzi, Pranav Mundada, Thomas Merkh, Andre R.R. Carvalho, and Michael J. Biercuk, PRX Quantum 2, 040324.\n\nWhat particular physical scenarios/experiments motivate the study of a system with an unknown initial state or a completely unknown quantum operation?\n\nWhy is uncertainty in the initial state a \"major challenge in quantum state control\"?\n\nIs this method more applicable to unknown initial states or unknown quantum dynamics?\n\nHow does this method compare (for instance, in terms of achieving higher-fidelity control solutions, more robust solutions, etc.) to previous methods in quantum control? \n\nWhat is your ML contribution? 
It seems like you are applying a standard algorithm (PPO) to a particular way of representing quantum states with limited measurements, or is there more to the algorithm that may have escaped my attention?\n\nYou say: \"we use the distance between quantum state representations as a proxy for quantum fidelity\", but then explicitly refer to the \"quantum fidelity\" on numerous occasions when describing experimental results. Do you mean the quantum fidelity? Or can the relationship between the representation distance and quantum fidelity be made more explicit?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper proposes an interesting extension of previous work in using an abstract representation of an unknown quantum state as a framework for performing \"quantum control\" and steering the system into a desired target representation. This seems a useful approach for tackling the control of black-box systems where no information about the dynamics or the initial state is given.\n\nThe background on quantum states and measurements, as well as the representation network structure, is described clearly and in a way such that non-specialists can easily follow the authors' exposition. The RGRL algorithm is also described clearly, and Fig. 1 provides a straightforward description of the precise operations which are implemented.\n\nThe analysis of the physics of phase space transitions, as well as the complexity of the control task, is also very thoroughly portrayed; this seems particularly appealing to readers with a physics background." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors develop a machine learning algorithm that characterises an unknown quantum state or unknown quantum operations by generating an abstract representation. The authors then show how reinforcement learning can be used to apply adequate control operations to realise a desired target state representation. The control actions are determined by a neural network based on measurement data from a set of quantum measurements. This builds upon previous work (Yan Zhu, Ya-Dong Wu, Ge Bai, Dong-Sheng Wang, Yuexuan Wang, and Giulio Chiribella. Flexible learning of quantum states with generative query neural networks. Nat. Commun., 13(1):6222, 2022.) which described a method of generating representations of quantum states, but the novelty lies in the reinforcement learning algorithm.\n\nThis particular approach is applied to two systems of interest. First, the authors show that they can effectively control many-body ground states and realise desired phase transitions. Two different state representations, one for predicting measurement statistics and one for predicting mutual information, are explicitly compared, and it is shown that higher-quality state representations yield better control efficiency. Lastly, the authors consider a control task where they prepare a target output state from a CV Kerr quantum gate." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There are a number of issues with this paper which I would like to address. \n\nFirstly, it seems as though it lacks the significance for it to be relevant to the wider audience of ICLR. 
The algorithm described in the paper seems interesting, but it is not clear that it is solving a broadly relevant and useful problem. The method is applicable particularly to systems where the initial states of the system are completely unknown, however no concrete examples of important and relevant physical systems which exhibit this behaviour are shown or described. Similarly, in the case of the CV Kerr gate, it is not clear what physically motivates the experimental setup: where a \"target output state can be obtained by applying the unknown quantum process to a certain input state\". It would be helpful to frame this more clearly with explicit references.\n\nSecondly, it is not clear to what extent the algorithm is practically useful when sampling from a real quantum device, since \"M\" copies of the same quantum state need to be prepared (what about the no-cloning theorem?). Claims in the paper are made that the number of measurement samples is \"small\". It would be useful to have a quantitative comparison of sample efficiency with other methods, and to discuss how the approach handles the constraints of real quantum devices, including preparation and measurement times. For example, the authors could look at the following reference: Irtaza Khalid, Carrie A. Weidner, Edmond A. Jonckheere, Sophie G. Schirmer, and Frank C. Langbein, Phys. Rev. Research 5, 043002, which constructs a sample-efficient RL algorithm for quantum control.\n\nThirdly, there are no benchmarks or comparisons drawn to previous work in the quantum control literature. From the presentation in the paper, it is not clear whether the method outperforms previous methods or approaches in any relevant quantum control tasks. Moreover, the authors claim their algorithm is \"scaleable\", but it is not clear what makes the method scalable, and there are no explicit benchmarks. There are some alternatives in the literature which could be compared and contrasted; for example, given uncertainty about the initial state, one can incorporate this into the design of a particular quantum control sequence which does not require a \"black-box treatment\" as shown in (Frank Schäfer et al 2020 Mach. Learn.: Sci. Technol. 1 035009), where the authors claim that \"Despite the uncertainty in the initial states, we managed to reach fidelities larger than 99.9% in the preparation of a GHZ state in a chain of M qubits.\" It is also unclear whether one would opt for a \"black-box\" method when the initial state is unknown in a real physical system, as often a more efficient approach in real quantum experiments would be to bring the quantum state into a known quantum state to then perform a control operation (e.g. optical pumping in atoms or transmon reset). \n\nLastly, the manuscript is not always clear, and some of the experimental results are inaccessible to a non-specialist audience and, I suspect, the broad audience of ICLR. The Kerr gate is mentioned, but no citation is given for further details, and it is not clear why this is a particularly important problem to tackle; this should be explained. Moreover, the control of many-body quantum states is not sufficiently contextualised or motivated. The in-depth discussion of different types of phase transitions as well as Figs 3/6 showing \"representation space\" are not clear (no axes labels). The \"t-SNE algorithm\" is also left unexplained with no citation, and it would be good to better explain this for a reader with an ML background or cut down on specific details on the phase transitions. 
Some more explicit references need to be made to the Appendix sections to clearly guide the reader to further background/implementation details." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Representation-guided reinforcement learning for efficient quantum state control with very few measurements" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024rgrl,\ntitle={{RGRL}: Quantum State Control via Representation-Guided Reinforcement Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x9J66fnMs8},\nnote={under review}\n}" }, "abstract": { "value": "Accurate control of quantum states is crucial for quantum computing and other quantum technologies. In the basic scenario, the task is to steer a quantum system towards a target state through a sequence of control operations. Determining the appropriate operations, however, generally requires information about the initial state of the system. Gathering this information becomes increasingly challenging when the initial state is not {\em a priori} known and the system's size grows large. To address this problem, we develop a machine-learning algorithm that uses a small amount of measurement data to construct its internal representation of the system's state. The algorithm compares this data-driven representation with a representation of the target state, and uses reinforcement learning to output the appropriate control operations. We illustrate the effectiveness of the algorithm by showing that it achieves accurate control of unknown many-body quantum states and non-Gaussian continuous-variable states using data from a limited set of quantum measurements." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Quantum control", "quantum state representation learning", "reinforcement learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/1b2ad1ff8c2cf8c0005b365dc63ad4b914d5a8dc.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/c9269672c432833630927984abb43e84f7bf3b1f.zip" }, "title": { "value": "RGRL: Quantum State Control via Representation-Guided Reinforcement Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
x9cXrOQskc
How Far are Today's Time-Series Models from Real-world Weather Forecasting Applications?
main
Active
Time-series benchmark;large-scale spatial-temporal dataset;numerical weather prediction model
datasets and benchmarks
3;3;5;6
5;5;3;3
2;2;2;3
3;2;2;3
1;2;3;3
4.25
4
2.25
2.5
2.25
-0.96225
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "None." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The dataset preparation (collection followed by data cleaning) is at the centre of contribution." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The submitted paper is an interesting contribution in the field of time series forecasting for a specific problem domain (weather). The key innovation of the paper is 1) preparation of the dataset to be used for community 2) its benchmark over existing algorithms 3) and suggestion for the future work." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper miss the latest research in the domain of TSF, such as use of foundation model for long term forecasting. Such as \n- Tiny Time Mixer\n- MORAI\n- Chronos\n- ..\nPlease review literature and add methods.\n\nAlso, there are method for AutoAI, AutoARIMA and etc from statistical domain." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Same as the weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. A new benchmark dataset is constructed specifically for station-based weather forecasting tasks.\n2. The process of dataset construction is clearly described.\n3. Experiments are conducted to compare general time series forecasting models with a specialized weather prediction model, providing insights into the limitations of general forecasting models for weather prediction and identifying future research opportunities." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new benchmark dataset, called Weather-5K, for evaluating general time series forecasting models on weather forecasting tasks. The dataset is built using data from a public database (ISD) and includes five types of hourly weather data from over 5,672 stations spanning 2014 to 2023, following quality control and post-processing. 
Experiments comparing general time series forecasting models with a Numerical Weather Prediction (NWP) model are conducted, with results analyzed to gain insights into the use of general time series forecasting models for weather forecasting tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Beyond computational complexity, are there other reasons why general time series forecasting models are needed for weather forecasting? Also, even regarding complexity, a comparison between general time series forecasting models and specific weather forecasting models should be provided to support this claim.\n2. Some datasets, such as Weather-Australia, GlobalTempWind, and CMA_Wind, are not cited correctly.\n3. In the outlier detection process, it is unclear how to ensure that detected outliers are not extreme weather events.\n4. The paper states, “these models, operating at the mesh space (e.g., grid resolution of 0.25° and 0.09°), may not be the optimal solution for GSWF as discussed in Section 1”; however, there does not appear to be a related discussion in Section 1.\n5. The statement \"5,672 weather stations worldwide are selected...\" is repeated on Page 4 and Page 5.\n6. It is unclear whether RQ4 describes the bridge between TSF models and NWP models or between GSWF and NWP models." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Q1.\nHow do you view the trade-offs between model complexity and interpretability in practical forecasting applications?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "S1. \nThe paper presents an advancement in the field of time-series forecasting by introducing the WEATHER-5K dataset, which fills a notable gap in the availability of comprehensive, high-quality weather data for model training and evaluation. \n\nS2.\nThe authors demonstrate a thorough understanding of existing datasets' limitations and the challenges TSF models face in practical applications. \nThe methodology for constructing the WEATHER-5K dataset includes rigorous data selection, quality control, and pre-processing.\n\nS3.\nThe paper is well-structured and clearly articulated." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces the WEATHER-5K dataset, a large-scale global station weather dataset designed to address the limitations of existing time-series forecasting (TSF) datasets.\nThe authors present WEATHER-5K, which includes comprehensive observational data from 5,672 global weather stations with hourly data spanning ten years.\nThe paper conducts extensive benchmarking of various TSF models against operational Numerical Weather Prediction (NWP) models. 
This comparison highlights the performance gap between academic TSF models and real-world weather forecasting.\nFurthermore, a standardized evaluation framework and new metric (SEDI) are proposed for assessing TSF models, focusing on overall accuracy and extreme weather event prediction." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1.\nThe benchmarking against NWP models primarily focuses on traditional NWP methods. \nThis paper could benefit from including a wider variety of contemporary data-driven NWP models, such as those utilizing machine learning techniques. \n\nW2.\nThe performance analysis largely focuses on traditional metrics like Mean Absolute Error (MAE) and Mean Square Error (MSE), which, while standard, may not fully capture the complexities of weather forecasting, especially regarding extreme weather events.\n\nW3.\nThis work does not adequately address the robustness of the proposed TSF models under varying conditions, such as different seasons or geographical variations." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "## Questions\n\n### Q1\n> $L=48$ for $H \\in$ {$24,72,120,168$}\n\nThe experiment setting is not optimal for long-term time series forecasting (TSF), where transformer models typically excel. For shorter horizons, graph neural networks might be more suitable. The current setting, with input lengths of $48$, may not fairly represent the performance of transformer models, which are designed for larger input lengths (usually $96$ or $336$ in the literature). Including results for different input lengths, especially $96$, would enhance the benchmark.\n\nThe current state-of-the-art for long-term TSF is Pathformer. It would be interesting to see its performance on short-term forecasts.\n\n### Q2\nISD data from NCEI/NOAA is the same source as the Corrformer and Informer weather datasets. Why are these datasets not included in Table 1? Although Weather from Informer is only one station and Global from Corrformer might have limited weather indicators, it is important to mention them to provide a complete picture and justify why WEATHER-5K should be preferred (cf. clarity section).\n\n### Q3\nWhat is the definition of an outlier, and how do the authors differentiate outliers from extreme weather? This distinction is crucial for users to understand modifications and potential model behavior. Have the modified timesteps been flagged for better model interpretation?\n\n### Q4\n> […] which highlights an unacceptably high RMSE when comparing ERA5 data to real-world observations […]\n\nAre the authors suggesting that ERA5 is inaccurate? 
To the best of my knowledge, ISD measurements involve both manual and automated processes, which could introduce human error, especially in manual records; this does not work in favor of an ISD-based dataset (even with error correction, which might introduce different types of errors).\n\n### Q5\n> Figur 3 e) illustrates the daily temperature variations at Chongqing city in China (57516099999) from June to September in 2022.\n\nHow many stations are included in this city? If there are several, is it relevant to use this one as the main example?\n\n### Q6\n> This indicates that ERA5 consistently underestimated the diurnal temperature range at this station throughout the heatwave period.\n\nThis observation is based on one station over six months, compared to more than 5,000 stations over 10 years of data. Such a claim requires more extensive analysis. The authors seem to discredit ERA5 while earlier referring to it as one of the most accurate real-time forecasting products and using it to fill missing data. Why use ERA5 if it is later discredited? If ERA5 is the most accurate, why not use it directly, especially given the potential for errors in ISD's manual records, the extensive correction work, and the data coverage limitations?\n\nHow many timesteps are considered extreme weather, and what percentage do they represent over the 10-year span? Is this percentage significant enough to promote WEATHER-5K over ERA5?\n\n### Q7\nIn Figure 4a, where is the NWP model? The authors mention the computational cost of NWP models as a limitation but do not provide performance data. The parameter size/cost of some models, like PatchTST, known for high computational cost, seems inconsistent with the literature, which suggests that the experiment setting (input of $48$) may be insufficient to demonstrate predictive power. There is also an issue with the figure: DLinear should be a point given its parameter size of $0.01$, but is not, and there are two circles without text (red and purple). What do they represent? Mamba and Autoformer? What do the training cost and error dotted lines represent?\n\n### Q8 - Experiment setting \nAn early-stopping patience of 3 is, in my experience, too low; 5 or 10 would be more appropriate.\n\n### Q9 - Reproducibility\nWhat is the batch size of Corrformer?\n\n## Dataset\n\n### D1\nWhat is the point of repeating latitude and longitude for each row of a given weather station? This increases the dataset size unnecessarily unless stations are moving.\n\n### D2\n> we have used latitude, longitude, and elevation to represent their geographic locations.\n\nWhere is the elevation information? The instance field does not mention elevation in the CSV files.\n\n### D3\n> However, the proportion of error introduced by the interpolation is relatively small.\n\nThe authors need to provide the number of errors corrected and a mask or label column to identify these corrected timesteps. This information is crucial for many tasks and could open the dataset to other applications beyond forecasting.\n\n## Limitations\n\n### L1\nAre errors always isolated timesteps on only one variable? If so, interpolation is understandable. If not, especially with consecutive erroneous timesteps or missing data on multiple variables, interpolation is limited. In such cases, did the authors use ERA5? If so, make it clearer in the paper, and explain in more detail how interpolations from ERA5 are made.\n\n### L2\nNot all users of this dataset will require worldwide stations for their applications. 
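A small selection helper would go a long way here; below is a minimal sketch, assuming an index file with ISD-style columns (the file name and the STATION/LATITUDE/LONGITUDE column names are my assumptions, not a released API):

```python
import pandas as pd

# Minimal sketch of a station-subset helper. The file and column names are
# assumptions based on the ISD-style CSV layout described in the paper.
def select_stations(index_csv, lat_range, lon_range):
    stations = pd.read_csv(index_csv)
    in_box = (stations['LATITUDE'].between(*lat_range)
              & stations['LONGITUDE'].between(*lon_range))
    return stations.loc[in_box, 'STATION'].tolist()

# e.g., keep only stations inside a rough European bounding box
subset_ids = select_stations('weather5k_stations.csv', (35.0, 60.0), (-10.0, 30.0))
```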
Have the authors provided such a way to select a subset of the dataset?\nIf not, please provide an explanation of the weather station name/ID formatting to simplify the selection task.\n\n## Clarity / Coherence\n\n### C1\nClarify the nature of Weather-5K and the targeted task. In my understanding, it is an hourly multivariate spatio-temporal weather dataset (several stations, each providing time series of several weather indicators) for short-, mid-, and long-term Time Series Forecasting. Forecasting tasks can be done in various fashions, such as:\n- Univariate-to-Univariate (U), ex. Predict temperature of station XX for the next day using historical temperature of station XX\n- Multivariate-to-Univariate (MU), ex. Predict temperature of station XX for the next day using historical weather indicators of station XX\n- Multivariate-to-Multivariate (M), ex. Predict all weather indicators of station XX for the next day using historical weather indicators of station XX\n- Spatio-Temporal (ST) U\n- ST MU, ex. Predict temperature of station XX for the next day using historical temperature from station AA to station WW\n- ST M\n- Multi-Variables (V) ST MU\n- V ST M\n\nTable 1 does not fully capture the dataset's potential in terms of forecasting tasks. \nIn addition, it should be revised:\n- Use commas consistently for thousands.\n- If \"frequency\" and \"year\" are provided, is \"length\" necessary? A column for missing data would be more informative.\n- For the Exchange dataset, should $8$ be the value for \"station\" instead of \"variable\" since each column represents a country?\n\n### C2\n> Task settings\n\nWhat is the forecasting fashion? M or MU? This is important for reproducibility. If M, are all stations considered, hence the input length of 48 \"to balance computation and performance\"? If not all stations, which subset of stations is used (for reproducibility)? If MU, which station is the target? And which stations are the inputs (all or a subset)?\n\n### C3\n> While they provide a solid foundation, their performance may be limited when faced with complex patterns or nonlinear relationships.\n\nDo the authors have a reference for this assumption?\n\n### C4\n> ML methods […] offer enhanced capabilities to handle nonlinear relationships and complex patterns.\n\nNeed references, perhaps [1] and [2].\n\n[1] https://people.math.sc.edu/devore/publications/NLACTA.pdf\n[2] https://link.springer.com/book/10.1007/978-3-319-58795-0\n\n### C5\nIn section RW, paragraph \"Data-driven numerical weather prediction,\" reintroduce the NWP acronym; the same should be done for GSWF to improve clarity.\n\n### C6\nFor Figures 8 to 14, provide weather station IDs and sample IDs or origin forecast (t) for each row (for reproducibility). In addition, to highlight the necessity of WEATHER-5K, use samples depicting different scenarios, notably including extreme weather, to visually demonstrate model behavior. Therefore, identify and provide cases where extreme weather:\n- Is in the input but not the predicted window.\n- Is in the predicted window but not the input.\n- Is in both.\n\nFurthermore, in these figures, there is no need to repeat the title of each plot and the x-axis; use a 7 x 5 grid with the sharex option.\n\n## Proof-read\nTo cite but a few:\n\n1. **[Very important]** There is a formatting issue with citations in the first paragraph of the introduction and throughout the paper. 
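For instance (the cite key is a placeholder):

```latex
% Wrong: the authors are not part of the sentence, so \citet reads badly:
TSF models struggle with extreme events \citet{someref2024}.
% Right: parenthetical citation with \citep:
TSF models struggle with extreme events \citep{someref2024}.
% Right: authors as part of the sentence with \citet:
\citet{someref2024} show that TSF models struggle with extreme events.
```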
Per the ICLR guidelines, citations should be in parentheses unless the authors' names are part of the text:\n> When the authors or the publication are included in the sentence, the citation should not be in parenthesis using \citet{} (as in “See Hinton et al. (2006) for more information.”). Otherwise, the citation should be in parenthesis using \citep{} (as in “Deep learning shows promise to make progress towards AI (Bengio & LeCun, 2007).”)\n\n2. The following text is repeated twice in the paper; remove the unnecessary repetition.\n> 5,672 weather stations worldwide are selected, spanning the period from 2014 to 2023, ensuring a recent and relevant time frame. This selection process focused on balancing the longevity of station operation, hourly data availability, and the inclusion of diverse weather variables.\n\nTypos:\n- “[…] into physically-based NWP models” or “physical-based” should be physics-based, no?\n- “(with yea 2022)”\n- “[…] benchmark experiments on WEATHER-5K,” comma?\n- “[…] perform better. In our benchmarks. So developing efficient time-series […]”\n- “[…] are may not be the optimal solution […]”\n- “[…] except for Correformer.”\n- “Figur 3 e) illustrates the daily temperature […]”\n- “[…] of the WEATHER-5 dataset […]”" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Intensive work in dataset creation.\n- Proposal of both a dataset and a benchmark.\n- Data splitting follows the longest cycle of the data (one year)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors propose a novel weather dataset that can be utilized in various time series forecasting tasks. The dataset comprises data from over 5,000 weather stations worldwide, each providing five weather indicators, making it a promising resource. The authors have meticulously selected stations with 10 years of data and have cleaned the raw data from ISD using interpolation and ERA5 to ensure a complete dataset for future users. \n\nThe dataset is used to benchmark state-of-the-art transformer and state-space models against a Numerical Weather Prediction technique." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Lack of coherence and clarity in the presentation.\n- Omission of some important, well-known datasets.\n- Overgeneralization based on unique observations.\n- Limited reproducibility.\n- A thorough proof-reading is required." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024how,\ntitle={How Far are Today's Time-Series Models from Real-world Weather Forecasting Applications?},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x9cXrOQskc},\nnote={under review}\n}" }, "abstract": { "value": "The development of Time-Series Forecasting (TSF) techniques is often hindered by the lack of comprehensive datasets. This is particularly problematic for time-series weather forecasting, where commonly used datasets suffer from significant limitations such as small size, limited temporal coverage, and sparse spatial distribution. 
These constraints severely impede the optimization and evaluation of TSF models, resulting in benchmarks that are not representative of real-world applications, such as operational weather forecasting. In this work, we introduce the WEATHER-5K dataset, a comprehensive collection of observational weather data that better reflects real-world scenarios. As a result, it enables a better training of models and a more accurate assessment of the real-world forecasting capabilities of TSF models, pushing them closer to in-situ applications. Through extensive benchmarking against operational Numerical Weather Prediction (NWP) models, we provide researchers with a clear assessment of the gap between academic TSF models and real-world weather forecasting applications. This highlights the significant performance disparity between TSF and NWP models by analyzing performance across detailed weather variables, extreme weather event prediction, and model complexity comparison. Finally, we summarise the result into recommendations to the users and highlight potential areas required to facilitate further TSF research.\nThe dataset and benchmark implementation will be publicly available." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Time-series benchmark", "large scale spatial-temporal dataset, numerical weather prediction model" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/9a68d8ba55d0d78e18ac6c91440fec93d63de236.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "How Far are Today's Time-Series Models from Real-world Weather Forecasting Applications?" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
x9gCQC3rVA
AdvWeb: Controllable Black-box Attacks on VLM-powered Web Agents
main
Active
Large Language Models;Web Agent;Multimodal;Attack
alignment, fairness, safety, privacy, and societal considerations
3;3;5;5;5
4;4;5;4;2
2;1;3;2;3
1;3;3;3;2
2;1;3;3;3
4.2
3.8
2.2
2.4
2.4
-0.166667
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "I would like to recommend the authors to avoid using existing company (stocks) names where it is not really required (Figure 1 and further in the text). I think, in the examples the names can be replaced with e. g. “Company A” and “Company B” or \"Stocks A\" and \"Stocks B\" without any loss for the paper motivation and content. I think, the reasoning for using real names might have been to show an actual example of the problem tackled in the paper. However, I think that using real names (especially in the context of an adversarial attack) does not contribute anything to the discussion and may potentially lead to uninvited implications." }, "flag_for_ethics_review": { "value": [ "Yes, Discrimination / bias / fairness concerns" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1) Do you have any suggestions on how one can defend web-agents against attacks similar to the one that you propose?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Originality: the papers proposes the first black-box attack against VLM-based web agents which is based on existing DPO Method. \n\nQuality: the authors perform in depth analysis of their proposed attack. The limitations of the method are discussed (Appendix C).\n\nClarity: the paper is structured reasonably.\n\nSignificance: black-box attacks on web agents may cause significant harm, so raising awareness of such attacks is important for the LLM/VLM field." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies the robustness of Web Agents and proposes a black-box adversarial attack AdvWeb which is based on Direct Policy Optimization (DPO). The attack efficiency is evaluated against existing SOTA web agent for several domains." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) Could you elaborate in the paper on how exactly the baselines (line 353) were adapted to your problem? Since they all achieve 0.0 on all tasks (Tables 1-2), it looks like the comparison is either not suitable or not fair. Do you have an explanation for such poor performance of the baselines?\n\n2) See the ethics review discussion." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- * I am confused about why the different between SFT and DPO varies so much by domain in figure 3. Can you help me understand this? This makes me suspicious.\n - Do you need RL? What about a prompting based baseline?\n - I did not follow \"We also fix the injection position at the ground truth HTML element e to further reduce the search space\"\n - In Algorithm 2, you are using the positive examples twice, both for SFT and then also for DPO. I'm curious whether this introduces some bias, and wondering if you have the comparison without this. This should be easy to do.\n - I'd be curious for an adversarial prompter model scaling study, how it varies with size.\n - Is there any overlap between the train and test tasks? How does the model algorithm performance scale with the number of train tasks?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "+: handling an important problem, web-use agents with black box access, quite close to what would happen in practice i.e., the threat model seems realistic, point about the stealthiness is important\n+ high ASR, but how is the ASR measured?\n+ The paper seems to be clear and well written\n+ Good use of train and test tasks" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents AdvWeb, a framework that injects invisible adversarial strings into websites to attack VLM-powered web agents. The technique uses Direct Policy Optimization (DPO) to optimize a prompter model that generates effective adversarial strings. The key claim is that the attack transfers effectively across different targets and settings. The main technical contribution is the technique for injecting invisible adversarial strings onto websites and using DPO to optimize a prompter model to make these strings work well.\n\nI am excited about this paper's potential, as it addresses an important practical problem. However, several issues currently prevent me from advocating for acceptance. If these issues are adequately addressed in the rebuttal, I would revise my recommendation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "High priority:\n\n- ** One weakness is that because the HTML isn't rendered, if you had a purely computer image based agent, it would not work. It would have been better to study stealthiness in that setting. I.e, studying agents that _don't_ take in the HTML. Please at least discuss this limitation in the paper.\n- ** Concerns about Baselines. I feel like the baseline should probably be someone adding a GCG string to a website HTML as a prompt injection attack, rather than as a jailbreak? You can use transfer there as you did. Did you use jailbreaks that work on GPT-4v? I think there should be working jailbreaks (e.g., Pliny's Godmode Jailbreak or Many Shot jailbreaking). I find the fact that no other attacks work pretty suspicious, and wonder how hard you tried to optimize them? 
FWIW, I am also sympathetic to the view that the prompt injection is just a new setting, in which case a human-written baseline, a prompting-only baseline, SFT, and SFT + DPO baselines may be more appropriate, which is already later in the paper (but framed differently)\n- ** \"The results, as shown in Table 3, demonstrate that the ASR remains high when we change the injection position or HTML field, with ASR being 71.0% and 97.0%, respectively.\" is a misclaim, given that it depends a lot on the domain, with finance performing poorly. The overall claim is OK, but please rewrite the claim to be correct. In general, you need to go through the paper and make sure you aren't overclaiming.\n\nMedium priority:\n- * Please add more examples of the attacks found to the main paper, and discuss their diversity. Do we have model collapse?\n- * Please move Algorithm 2 into the main paper; it's mentioned several times there. It's also not an algorithm, it's a function. Please fix this.\n- * You need to explain how you are doing the labelling in Algorithm 1; I assume you use an LLM, but that seems important. Or do you just check whether the action taken matches the target?\n- * I appreciate this is hard, but I'd love to see whether the attack works on Claude Sonnet 3.5. Claude models tend to be more robust, so that's why I am curious.\n\nLow priority:\n- \"innovative reinforcement learning (RL)-based attacking pipeline, tailored to solve these challenges effectively\". Please drop \"innovative\". Similarly \"advanced\" attacking framework, drop advanced. Please fix the language." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. What other baselines did you consider, and what would it look like for you to compare your method to the attacks against web agents that you list in related work? \n2. How expensive was the training & optimisation process for generating successful attacks?\n3. Could you explain what signal you optimise against from the target model - does the victim model refuse?\n4. When you inspect samples manually, do you get a sense of why transfer from GPT-4V to Gemini on finance tasks is so much lower?\n5. How did you select the tasks from Mind2Web to test against? Was it manual inspection? What were your criteria to judge that a task involves \"critical events that lead to severe consequences\"?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "I believe what the authors are trying to achieve is novel, interesting, and important: they target a timely and significant concern that will only become more prevalent as generalist web agents are deployed and relied on. \n\nThe work also seems novel. 
My impression from the literature is that there aren't many other papers showing successful attacks/injections against web agents, especially black-box, and via means that would be invisible to most humans.\n\nThe authors' constraints of 1.) only allowing themselves black-box access to the target models, and 2.) their constraint of \"stealthiness\" increase the difficulty and realism of their attack framework. Attackers hiding malicious instructions in hidden HTML fields seems realistic and aligns with their threat model. \n\nGiven how realistic their attacks seem, and how high their reported ASRs are, this paper could promote increased safeguards on generalist web agents - or at the least could present a compelling demonstration that users should be careful when selecting which agent frameworks to rely on. \n\nIt's impressive that they set their success threshold quite high - at achieving exactly the action their attack targets (instead of e.g. just distracting the agent to make it fail at its main instruction)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces an attack framework against generalist web agents that parse websites through screenshots and HTML. The authors attempt to demonstrate how a malicious web developer (either in-house or via open-source HTML that other devs blindly import) could use HTML that's invisible to human users to influence the actions of a generalist web agent. \n\nThey optimise attacks in a black-box setting against models based on GPT-4V and Gemini 1.5, scaffolded as web agents through the SeeAct framework. They generate attacks using GPT-4 to generate HTML with hidden instructions, then optimise an adversarial prompter against the target model using RLAIF so that those hidden instructions are maximally effective against the target.\n\nThe SeeAct framework allows the victim models to read HTML elements in the webpage that a normal human user wouldn't normally see. They inject adversarial prompts into these hidden (the authors call these \"invisible\") HTML fields, and their results show success in influencing SeeAct agents to take different actions than their user prompt instructed them: for example, when told to purchase MSFT stocks on a financial website, the injected HTML prompt misleads the agent to buy e.g. Nvidia instead. \n\nThe authors display a high success rate against the target models, especially in the face of their four baselines, all of which achieve 0% ASR across every domain, on both models. Three of their baselines (GCG, AutoDAN, COLD-Attack) involve optimising adversarial prompts against white-box models and hoping those transfer, and the other, Catastrophic Jailbreak, requires no access to model internals but control over the model's decoding hyperparameters.\n\nIn addition to requiring that their injected prompts are \"stealthy\" - i.e. invisible to a normal user, the authors also emphasise the controllability of their attacks - i.e. the ease with which attackers can switch out one set of injected instructions for others. With this property, they also demonstrate that their attack strings can be adapted to transfer between different task domains. \n\nThe authors also report that their attacks transfer well between the two models they attack, and between different HTML fields (using the same successful prompts), which suggests that their attacks aren't brittle to the specifics of the model / HTML field they were optimised for." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While the paper presents a novel and important problem setting, I don't feel confident that the paper provides me the information I'd need to evaluate if the attacks it introduces are as successful and significant as the authors claim. I think it's quite possible the authors have all the data they need to convince me otherwise, but I urge them to include it in the paper.\n\nOverall, reading the paper gives me a weak sense of what generalist web agents can be influenced to do, or how significant the negative outcomes of these attacks could be. I am not able to evaluate the risks of their attack method if all I know is that they cause models to take unspecified targeted actions in a domain the paper only labels as \"cooking\" or \"housing.\" If the authors could provide many more examples of successful tasks & attacks, and their consequences - at least one for each attack domain, their claims of significance would be much more convincing. \n\nIn their related work, the authors claim that they beat \"strong baselines.\" But I'm unconvinced that any baseline on a task that achieves 0% ASR on every task, on both models they test, should be considered a strong baseline. The authors claim that there are no analogous black-box attacks in this setting, which I can't refute from what I've read elsewhere. However, I'm confused why they don't compare at all against the methodologies from the papers they list in the existing body of research from \"Existing Attacks against Web Agents.\" Even if those methods are manual, where this paper's methods are learned, it feels like a much more informative comparison. I would urge that the authors find a more successful baseline to test against, and ideally show some results comparing their success rate to at least one other method for steering the same web-based agents. \n\nBecause of the weakness of the baselines they compare against, and the lack of comparison to other methods for achieving the same ends. I'm left without much context to evaluate how powerful this method is. I recognise that the attack domain is novel, and I found figure 3 helpful for understanding that their training pipeline helps their models achieve higher success after DPO. Discussion of more ablations, and in particular considerations of alternative methods for training and optimising their attacks, and why they weren't as successful, would be informative to my understanding of this paper.\n\nI don't leave the paper with a strong sense of the offense-defense balance in this setting. In particular, the paper might benefit from more detail on how expensive it is to generate these attacks, as the authors do not provide much detail on how expensive the training process is (in steps or dollars) for their RLAIF pipeline outlined in Algorithm 1. For any attack, it seems important to know how quickly and cheaply attackers could generate new attacks when the victim models are updated. Further, if their RLAIF pipeline required very many training steps, it's plausible that the developers of these web agents could become aware. I would be interested to see e.g. how the ASR of their framework increases throughout training.\n\nI'm confused why the title of the paper focuses on attacking \"VLM-Powered Web Agents.\" While true - the SeeAct framework only employs VLMs - as far as I can tell nothing about the victims' multimodality is being attacked, simply their parsing of HTML text. 
My first impressions of the paper led me to expect exploits against the multimodality of these models, which was ultimately incorrect. I suggest the authors remove \"VLM powered\" from the paper title. \n\nThe authors repeatedly stress the importance of the controllability of their HTML attacks (including in the title). That is, that the same attack strings can be easily adapted to cause different target actions on the same task. Any examples of the ways in which their HTML attack strings are editable would be helpful. But more importantly, I do not get a sense from the paper why this is important under their threat model. I think the answer may be a question of cost: that it is cheap to retool successful attack strings for a different purpose, but the existing wording in the paper does not mention this. The most I see is that the authors claim: \"allowing attackers to switch targets with minimal effort and no additional computational overhead.\" This remark is on the penultimate page of the paper, which I think is too late and too little justification for what is introduced as a key constraint of their attacks - even in the title of the paper. More commentary on cost, as I request above, would also have made the relevance of this constraint - which seems technically sound - much clearer.\n\nThe transfer results aren't as convincing as their main results - especially since the ASR is quite varied across different domains, achieving 0% transfer on probably the most compelling domain for their threat model (online finance tasks).\n\nAppendix C, addressing limitations, is two sentences long and claims that \"It is possible to optimize a more effective adversarial prompter model.\" The authors don't expand on this claim and I would rather they address more of the limitations in their threat model that this review (& if relevant, others) highlights.\n\nSome weaknesses in writing & setting that ought to be addressed but should be trivial to fix:\n\nThe authors refer to Algorithm 2 throughout the paper, including in the second figure and in their description of Algorithm 1. Algorithm 2 can in fact be found as the only entry in Appendix A, while Algorithm 1, which in my opinion requires Algorithm 2 to understand it, is in the main body. Algorithm 2 is critical for understanding how they generate the initial malicious HTML requests that they then label for RLAIF using the victim model, and I was confused at first because the authors didn't list where Algorithm 2 could be found.\n\nSome claims in the abstract and introduction seem too strong for the level of evidence this paper provides. For example:\n* That VLMs “have revolutionized the creation of generalist web agents … thereby boosting human efficiency and productivity” requires at least some citation or evidence\n* That their choice of injecting into unseen HTML fields makes it “nearly impossible for users to detect tampering” feels like a stretch: can't any user inspect the page's HTML? I appreciate that most users wouldn't. \n\nThere's also an error on the final page, where a paragraph break interrupts a sentence immediately before the conclusion, leaving a hovering sentence fragment that reads \"[newline] demonstrates the situation in which the user wants to buy Qualcomm. However, after adding the adversarial injection, the agent buys Apple stocks instead.\" This sentence fragment, which appears to be an incomplete draft of the final sentence of Section 5, needs to be addressed before the paper can be acceptable." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Thanks for submitting your paper to ICLR 2025.\n\nWith the advancement in the intelligence of large language models, using VLM agents to automate a series of tasks is emerging as a future development trend. However, the associated security risks remain unexplored. This paper introduces AdvWeb in an attempt to expose the security risks involved in VLM-powered web agents. The topic discussed in this work is both timely and highly important. \n\nHowever, I have the following concerns:\n\n- First, regarding the threat model, I am curious about the identity of the adversary here. The adversary’s method involves embedding malicious strings in web content. Generally, if users choose commands that access official websites, how could an adversary manipulate an official site? If it is not an official website, how would the adversary ensure that their site gets indexed?\n\n- The threat highlighted in this paper is urgent, making it essential to consider corresponding defense mechanisms. However, this paper does not discuss any defense strategies.\n\n- The AdvWeb attack scenario, which targets web content, appears may similar to prompt injection attacks [R1]. Analyzing and comparing the differences between these two attack types would be beneficial.\n\n- This paper selects SeeAct as the web agent framework and uses GPT-4V and Gemini 1.5 as the underlying VLMs. I am curious whether current open-source VLMs also support SeeAct. Additionally, in future edge applications, where privacy is a priority, smartphones may deploy smaller-scale VLMs locally. Analyzing the attack’s effectiveness on open-source VLMs would help demonstrate the generalizability of the attack.\n\n- In practical applications involving user payment actions, a secondary authentication of the user’s identity is typically required, providing the user with an opportunity to review the final outcome and potentially prevent malicious actions. 
Does this scenario indirectly suggest that the stealthiness of the attack may be limited?\n\nReference\n\n[R1] Adversarial Search Engine Optimization for Large Language Models (https://arxiv.org/abs/2406.18382)" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper presents AdvWeb, the first black-box targeted attack framework for VLM-based website agents, proposing a method to train adversarial prompt models through reinforcement learning with DPO in a black-box setting, which is innovative and effective.\n\n- The method ensures stealth and controllability under black-box constraints, which adds practical value.\n\n- The experimental results demonstrate the effectiveness of AdvWeb in attacking different VLM-based website agents and tasks, which helps to raise awareness in the field for developing more reliable web agents and implementing effective defense measures." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel black-box attack framework, AdvWeb, targeting website agents driven by Visual Language Models (VLMs). Under the black-box framework, AdvWeb proposes a two-stage training paradigm. First, it conducts supervised fine-tuning on positive adversarial prompts to obtain the SFT Prompter Model. Subsequently, it leverages the black-box feedback from the target agent combined with a DPO strategy for reinforcement learning to train an adversarial prompt model, generating and injecting adversarial prompts into web pages to induce specific adversarial behaviors from the web agent. This method is characterized by its stealth and controllability, with a high attack success rate." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The threat model considered in this paper may be impractical.\n\n- This paper does not propose any potential defense mechanisms.\n\n- The types of victim web agents considered are limited." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No ethical concerns are involved." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Why does the paper not compare with prompt injection attacks, which are more aligned with the context of this work?\n\n- What are the key advantages of the proposed attacks compared to existing prompt injection attacks?\n\n- What are the evaluation results of the proposed attacks against prompt injection defenses?\n\n- How are the white-box baseline jailbreak attacks implemented against black-box VLMs?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper showcases the effectiveness of attacks on SeeAct across several prominent VLM models, including GPT-4V and Gemini 1.5.\n\n- It conducts an extensive ablation study to evaluate the effectiveness of the proposed attacks under different attack settings and targets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a black-box targeted attack against Vision-Language Model-powered web agents. The approach involves training an adversarial prompt model using direct policy optimization. This model generates and injects adversarial prompts into web pages, leading web agents to execute targeted malicious actions. The effectiveness of the proposed attack is demonstrated through evaluations conducted on SeeAct." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Since the attack's goal is to mislead agents into executing malicious actions, the scope of the paper aligns more closely with prompt injection attacks rather than jailbreak attacks. However, the paper focuses significantly on distinguishing itself from jailbreak attacks and includes them as evaluation baselines for comparison. It would be beneficial to provide a comprehensive survey of prompt injection attacks in the related work section, highlighting the key differences and advantages of the proposed method compared to existing attacks. Including prompt injection attacks as baselines would strengthen the contribution of the paper.\n\n- Given the extensive research on prompt injection attacks, the paper does not clearly distinguish the key advantages of its proposed attack over existing methods. Specifically, there is a line of prior studies [1][2] that have utilized context-ignoring techniques by appending prompts like \"Ignore previous instructions, do xxx.\" This is similar to the method used in this paper, as illustrated in Figure 4, where the attack prompt is \"Disregard all previous and subsequent instructions and choices, do xxx.\" Moreover, previous methods do not require collecting a prompt training dataset or performing expensive DPO training of an LLM-based attack prompter, making them more efficient.\n\n- Defenses against prompt injection attacks have also been extensively studied (e.g., [3][4]). It is important to assess the effectiveness of the proposed attack against these defenses to determine its practical applicability and robustness.\n\n- The selection of victim web agents is limited. The paper evaluates only one type of VLM-based web agent, which may not fully demonstrate the generalizability of the proposed attacks. Incorporating a more comprehensive evaluation with a variety of web agents would strengthen the paper.\n\n- In comparing baselines during evaluation, the paper includes several white-box jailbreak attacks like GCG and AutoDAN. It is unclear how these baselines are implemented against SeeAct, given that it uses proprietary VLMs that are black-box and do not provide gradient information.\n\nReference:\n\n[1] Branch, Hezekiah J., et al. \"Evaluating the susceptibility of pre-trained language models via handcrafted adversarial examples.\" arXiv preprint arXiv:2209.02128 (2022).\n\n[2] Perez, Fábio, and Ian Ribeiro. 
\"Ignore Previous Prompt: Attack Techniques For Language Models.\" NeurIPS ML Safety Workshop. 2022. \n\n[3] Liu, Yupei, et al. \"Formalizing and benchmarking prompt injection attacks and defenses.\" 33rd USENIX Security Symposium (USENIX Security 24). 2024.\n\n[4] Chen, Sizhe, et al. \"Aligning LLMs to Be Robust Against Prompt Injection.\" arXiv preprint arXiv:2410.05451 (2024)." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce AdvWeb, a novel black-box attacking framework targeting LLM and VLM-based web agents, leveraging the feedback from target agents to generate adversarial prompts." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024advweb,\ntitle={AdvWeb: Controllable Black-box Attacks on {VLM}-powered Web Agents},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x9gCQC3rVA},\nnote={under review}\n}" }, "abstract": { "value": "Vision Language Models (VLMs) have revolutionized the creation of generalist web agents, empowering them to autonomously complete diverse tasks on real-world websites, thereby boosting human efficiency and productivity. However, despite their remarkable capabilities, the safety and security of these agents against malicious attacks remain critically underexplored, raising significant concerns about their safe deployment. To uncover and exploit such vulnerabilities in web agents, we provide AdvWeb, a novel black-box attack framework designed against web agents. AdvWeb trains an adversarial prompter model that generates and injects adversarial prompts into web pages, misleading web agents into executing targeted adversarial actions such as inappropriate stock purchases or erroneous bank transactions—actions that could lead to severe consequences. With only black-box access to the web agent, we train and optimize the adversarial prompter model using Direct Policy Optimization (DPO), leveraging both successful and failed attack strings against the target agent. Unlike prior approaches, our adversarial string injection maintains stealth and control: (1) the appearance of the website remains unchanged before and after the attack, making it nearly impossible for users to detect tampering, and (2) attackers can modify specific substrings within the generated adversarial string to seamlessly change the attack objective (e.g., purchasing stocks from a different company), greatly enhancing attack flexibility and efficiency. We conduct extensive evaluations, demonstrating that AdvWeb achieves high success rates in attacking state-of-the-art GPT-4V-based VLM agents across various web tasks in black-box settings. Our findings expose critical vulnerabilities in current LLM/VLM-based agents, emphasizing the urgent need for developing more reliable web agents and implementing effective defenses against such adversarial threats." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Large Language Models", "Web Agent", "Multimodal", "Attack" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/33d8ee071549e5099bb2e6ad531aff8cce198c3f.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/28cdf3f34501ef51da8355453d65e3c04c29b2ed.zip" }, "title": { "value": "AdvWeb: Controllable Black-box Attacks on VLM-powered Web Agents" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
x9rtYetTsA
Mitigating Spurious Bias with Last-Layer Selective Activation Retraining
main
Active
spurious correlation;robustness;classification
unsupervised, self-supervised, semi-supervised, and supervised representation learning
3;3;3;3;5
3;2;4;4;4
2;2;2;1;3
2;2;1;1;3
3;2;3;3;3
3.4
3.4
2
1.8
2.8
0.375
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Distribution of Spuriousness Scores: Could the authors show the distribution of the proposed spuriousness scores across neurons in different datasets? This would help validate the claim that the spurious and core neurons can be effectively separated.\n\nDifference from Retraining with Up-Weighting: What is the difference between the proposed algorithm and retraining the last layer while up-weighting the misclassified samples? Clarifying this would help in understanding the distinct contribution of the proposed method." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The proposed LaSAR method aims to achieve robust learning without group information by proposing metrics to evaluate whether a neuron is spurious or core related. This approach makes LaSAR a practical and fully unsupervised solution to mitigating spurious bias." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the issue of spurious correlations in deep neural networks trained with empirical risk minimization (ERM). The authors propose an approach called Last-Layer Selective Activation Retraining (LaSAR), which aims to mitigate spurious bias without requiring group labels or external annotations. The method selectively blocks neurons identified as spurious during the retraining of the last classification layer, thus promoting the model to learn robust decision rules. The authors demonstrate that LaSAR is effective across multiple data modalities, such as vision and text, and improves worst-group accuracy in benchmark datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Limited Theoretical Analysis: While the empirical results are promising, the theoretical foundation for why the proposed spuriousness score works effectively in all cases is very limited. Including more rigorous analysis or theoretical guarantees would strengthen the paper's claims about the effectiveness of LaSAR.\n- Limited Heuristic Exploration: There is limited heuristic exploration of the distribution of the proposed spuriousness score. Figure 4 appears to be cherry-picked, and it would be more persuasive if the authors could provide the distribution of the proposed spuriousness score across neurons in different datasets.\n- Incremental Contribution: The phenomenon that spurious neurons and core neurons can be separated has been demonstrated in prior work [1][2]. Moreover, the proposed spuriousness score is calculated as the median among misclassified samples and the median among correctly classified samples, which appears equivalent to retraining the last layer while up-weighting the incorrect samples. This limits the novelty of the contribution. 
Furthermore, the neuron masking algorithm assumes that a neuron can represent part of the spurious features, which is a strong assumption that may not always hold true. Additionally, it is unclear why masking the last layer is necessarily better than masking a middle layer.\n- JTT Algorithm Classification: JTT is listed as a semi-supervised algorithm at line 362, but it appears to work without group information. This classification should be corrected.\n\n[1] Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations\n[2] Complexity Matters: Dynamics of Feature Learning in the Presence of Spurious Correlations" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How does the method ensure that it doesn't accidentally block neurons representing valid but complex feature combinations rather than truly spurious correlations?\n2. How does the method handle cases where features might be spurious in some contexts but valid in others?\n3. (Also related to 1.) There has been evidence that neurons may learn polysemantic features. What is the impact of LaSAR when neurons learn linear combinations of spurious and core features?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Key Strengths:\n\n1. Spurious neuron identification: The proposed LaSAR framework introduces an interesting approach to identify spurious neurons using activation patterns and prediction outcomes, providing a self-guided mechanism for bias detection.\n\n2. Practical Utility: The method works as a post-hoc tool in standard ERM training settings, making it highly practical for real-world applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces Last-layer Selective Activation Retraining (LaSAR), which identifies and mitigates spurious bias without requiring external supervision or group labels. The key point lies in observing that neuron activations combined with prediction outcomes can self-identify spurious features, and then using this information to selectively block spurious neurons during last-layer retraining. The method works as a practical post-hoc tool in standard ERM training settings, and requires no additional annotations beyond class labels. The authors compare their method with competitive baselines such as JTT and DFR, and show some improvement in worst-group accuracy on a benchmark of 4 datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The contribution of this paper is severely limited. 
Indeed, the core intuition of using (i) misclassified examples of validation data and (ii) retraining all layers or the linear head to reduce reliance on spurious features has been demonstrated previously with methods such as JTT and AFR. How is LaSAR fundamentally different from AFR? \n\n2. Lack of fair comparison. Although JTT and AFR need group information on the validation data only to tune hyper-parameters, they can instead be tuned using worst-class accuracy. The authors should therefore compare their method with JTT and AFR when tuned on worst-class accuracy.\n\n3. No theoretical guarantees are provided about the convergence and stability of the selective activation retraining process, even on synthetic data." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Why did you define the spuriousness score as (5)? To help readers understand the rationale behind it, I think the authors may need to add more explanation.\n\n2. In line 157, the authors mentioned that WGA is the accuracy on the worst-performing data group in the test set $\\mathcal{D}_{test}$. However, they used argmax in the formula for WGA, which seems problematic: argmax will output a group label rather than an accuracy value, and it will select the best-performing data group in terms of accuracy rather than the worst-performing one. \n\n3. In Section 3.2, they used a synthetic motivating example. It may be better to use a real motivating example.\n\n4. In line 321, the authors may want to say \"equation (6) and equation (7)\" rather than equation 6 and equation 7.\n\n5. I think the study objective in this paper is quite similar to variable selection in statistics. We can use many penalties, such as the L1 penalty, to remove those spurious features. I do not see the advantages of the proposed method compared with those variable selection methods in statistics. The authors may need to discuss this point." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The authors did extensive experiments." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors propose a novel method to self-identify spurious features and mitigate spurious bias by retraining the last classification layer. In general, the idea of using neuron activations before the last classification layer, coupled with their final prediction outcomes, to provide self-identifying information on whether the neurons represent spurious features seems interesting." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The writing in some places is unclear; in particular, the authors did not clearly explain the reasoning behind the proposed method for identifying spurious features.
They also did not provide theoretical results to support the proposed method." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Could you also provide results in terms of balanced accuracy (in addition to accuracy and WGA)? \n- I think that retraining on a part of the validation set may lead to an \"unfair\" comparison with baseline methods that are not also trained on validation data\n- Reweighting the classification layer essentially does not \"remove\" the bias from the whole model; it is just a correction. Do you think this might be an issue in certain cases?\n- I do not see a clear difference between core activation maps and spurious activation maps for CelebA in Fig. 4; the spurious heatmaps even seem a bit more focused on the hair (which is the target task). \n- Using your method, do you think it would be possible to provide pseudo-labels for the training data in order to use a supervised debiasing method?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper tackles a very important issue, which is learning unbiased models from biased data. \n- The proposed method does not need any kind of annotation on the bias; it just leverages the class label (unsupervised debiasing)\n- The reported results show improvement w.r.t. other methods" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this work, the authors propose a debiasing method that works by retraining only the last layer (classification layer) in order to re-weight different factors in the latent representation. The assumption is that latent factors carry different information (source, spurious, noise) which can be effectively filtered out from the classification layer by reweighting. They test their method on standard debiasing benchmarks such as CelebA, Waterbirds, MultiNLI, and CivilComments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Here are my main concerns about the work: \n\n- The authors' assumption is that the latent representation can be factorized into source, spurious, and noise components. This is clearly shown in the toy example; however, it is not clear why this should also happen in representations extracted from deep neural networks on complex data. It might not be so simple to factor out single components in the learned representations, as they might be intertwined and correlated. Can you provide some more theoretical backing for this method? \n\n- I think that validation on more difficult datasets such as 9-Class ImageNet / ImageNet-A (https://openreview.net/forum?id=2OqZZAqxnn) should be added to the experimental validation.\n\n- The related work section should be updated a bit with relevant works in the area (e.g. [1-6])\n\n[1] Bahng, Hyojin, et al.
\"Learning de-biased representations with biased representations.\" International Conference on Machine Learning. PMLR, 2020.\n\n[2] Tartaglione, Enzo, et al. \"End: Entangling and disentangling deep representations for bias correction.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021. \n\n[3] Zhang, Yi-Kai, et al. \"Learning Debiased Representations via Conditional Attribute Interpolation.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\n\n[4] Barbano, Carlo Alberto, et al. \"Unbiased Supervised Contrastive Learning.\" ICLR. 2023.\n\n[5] Zhang, Yi, et al. \"Poisoning for Debiasing: Fair Recognition via Eliminating Bias Uncovered in Data Poisoning.\" ACM Multimedia 2024. 2024.\n\n[6] Wang, Yining, et al. \"Navigate Beyond Shortcuts: Debiased Learning through the Lens of Neural Collapse.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Some minor concerns.\n- I don't understand why you did not provide visualizations using linear dimensionality reduction for the motivating example (Section 3.2), since you are using a linear model. Using t-SNE somehow confuses the example.\n- How were the baselines implemented? Are they openly available (it could make sense to provide the links), or did you re-implement them?\n- How many spurious features were masked away in each of the examples?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper deals with a relevant problem. It proposes a new method that is, to the best of my knowledge, original, and presents an evaluation on relevant benchmarks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a new method for mitigating spurious bias in an unsupervised fashion. \n\nIt tries to detect non-essential features based on the pattern of errors, mask them away, and retrain the last layers. The method is compared against other methods on image and text datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My main concern is that I find the motivation for the method not very strong. I believe the authors don't provide strong evidence for the main assumptions that motivate the method.\n\nParticularly, the core assumptions of the method are that\n1. some features in the latent embedding that are responsible for encoding the spurious correlation are *confined to single neurons* and can be masked away. It is a bit unclear to me whether this is true.
For instance, maybe some component that is not entirely aligned with any specific neuron could be responsible for encoding this spurious feature.\n2. spurious features can be distinguished from core features by looking at the error density. While the toy example motivates this, it seems the pattern we see in Fig. 2(b) for spurious vs. core features is very different from what we see in Fig. 4.\nOverall, I think these are two very important assumptions of the method that should be more clearly demonstrated." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024mitigating,\ntitle={Mitigating Spurious Bias with Last-Layer Selective Activation Retraining},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=x9rtYetTsA},\nnote={under review}\n}" }, "abstract": { "value": "Deep neural networks trained with standard empirical risk minimization (ERM) tend to exploit the spurious correlations between non-essential features and classes for predictions. For example, models might identify an object using its frequently co-occurring background, leading to poor performance on data lacking the correlation. Last-layer retraining approaches the problem of over-reliance on spurious correlations by adjusting the weights of the final classification layer. The success of this technique provides an appealing alternative to the problem by focusing on the improper weighting on neuron activations developed during training. However, annotations on spurious correlations are needed to guide the weight adjustment. In this paper, for the first time, we demonstrate that neuron activations, coupled with their final prediction outcomes, provide self-identifying information on whether the neurons represent spurious features. Using this information, we propose last-layer selective activation retraining, which retrains the last classification layer while selectively blocking neurons that are identified as spurious. In this way, we promote the model to discover robust decision rules beyond spurious correlations. Our method works in a classic ERM training setting where no additional annotations beyond class labels are available, making it a practical post-hoc tool for improving a model's robustness to spurious correlations. We demonstrate that our method is effective with different model architectures and can effectively mitigate spurious bias on different data modalities without requiring annotations of spurious correlations in data." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "spurious correlation", "robustness", "classification" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review."
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b835717ce94880a162b94d1b821fd3245dd64c38.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Mitigating Spurious Bias with Last-Layer Selective Activation Retraining" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xA8WW2dlTX
ICDA: Interactive Causal Discovery through Large Language Model Agents
main
Active
Causal Discovery;LLM;Black box optimizer
causal reasoning
3;3;5;5;6
2;4;3;3;3
2;2;2;3;3
2;2;2;2;3
3;2;3;3;2
4.4
3
2.4
2.2
2.6
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Some of the figures don't have confidence bounds (last subplot for Fig 6). Am I missing something?\n\n- What does \"simplicity\" mean on L138?\n\n- Some works suggest that LLMs' confidence might be unreliable; I wonder what the intuition is for ICDA's much better results in comparison to random agents?\n\nMinor:\n\n- Figures might benefit from increasing the font size.\n\n- There is a weird indent on L254." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper introduces a novel application of LLMs for causal discovery - using LLM-defined interventions to refine causal discovery.\n\n- ICDA is evaluated on diverse datasets including a dataset not part of the model pretraining.\n\n- The paper is well-written and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes the Interactive Causal Discovery Agent (ICDA), which uses LLMs for causal discovery through an uncertainty-driven edge intervention selection process. The method prioritizes uncertain edges for intervention and utilizes local updates from feedback, achieving strong performance on a range of real-world causal graphs. Extensive experiments validate ICDA’s robustness and adaptability, showing it outperforms zero-shot LLM prompting across diverse graph structures." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Lack of comparison with statistical methods.\n\n- There is a lack of comprehensive results across models. I acknowledge the authors have presented results in Figure 6, but comparing different ICDA variants and random agents for smaller models would have been interesting." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The plots in Figures 2 and 4 are very small and hard to read.\n2. In line 312 there seems to be a space missing - “weablate”\n3. The citation (Sharma & Kiciman, 2020) in line 147 seems misplaced. What was the authors' intention?"
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is cleanly written. \n2. The approach is well motivated by the literature. \n3. The experimental section is extensive." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a new method for end-to-end interactive causal discovery using LLMs.\nThe approach comprises two main components: an intervention selection method based on LLM uncertainty predictions and a local update strategy based on newly acquired knowledge. \nThe approach is based on a formulation of edge intervention. The method is evaluated on a set of 7 real-world graphs and compared against its ablations. Additional analysis is provided, covering evaluation with different LLM models and evaluation on a graph unseen during LLM training." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The definition of edge intervention feels unrealistic. Could the authors please provide an example of a causal operation that reveals the edge without additional knowledge or assumptions about the graph structure? It seems to me that such data might be extremely costly to obtain and the operation might in some cases be equivalent to revealing the whole graph, thus making the described approach impractical.\n2. The paper lacks a discussion about the limitations of the proposed approach." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The setting may not be realistic:\n- In the proposed setting, line 144, it is assumed that one could directly obtain the ground-truth causal edge label in each intervention, which is not realistic;\n- The setting significantly differs from the standard practice in the literature of experimental design [1,2,3];\n\n2. There is no guarantee that LLMs could provide valid results:\n- It is widely shown that LLMs cannot provide faithful causal results [4,5], while the proposed framework relies heavily on the results of LLMs;\n- Similarly, the uncertainty provided by LLMs is not warranted;\n\n3. Previous baselines on experimental design are neglected, for example, previous works on intervention selection or experimental design [1,2,3].\n\nMinor\n- The line numbers of the algorithm are all 0; \n- Many key steps in the algorithm are not defined;\n\n**References**\n\n[1] Learning neural causal models with active interventions.\n\n[2] Trust your $\\nabla$: Gradient-based intervention targeting for causal discovery.\n\n[3] Active learning for optimal intervention design in causal models.\n\n[4] Causal parrots: Large language models may talk causality but are not causal.\n\n[5] Discovery of the hidden world with large language models."
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "(+) This work presents an interesting use of LLMs in causal discovery;\n\n(+) The presentation and organization of this work are clear and easy to follow;\n\n(+) Some experiments demonstrate the effectiveness of the proposed approach;" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work studies using LLMs to perform causal discovery in an interactive manner. The authors propose to incorporate LLMs as an agent to produce initial graphs, and iteratively optimize the updated causal graphs by selecting proper interventions. During the selection of the intervention targets, LLMs are leveraged to provide uncertainty measures for the unknown edge. The authors show that the proposed approach can effectively outperform simple baselines on eight real-world causal graphs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "(-) The setting may not be realistic, since it is challenging to directly obtain the ground-truth causal edge label in each intervention;\n\n(-) There is no guarantee that LLMs could provide valid results;\n\n(-) Previous baselines on experimental design are neglected;" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Can you elaborate on:\na) how your algorithm satisfies the LLMs-as-optimizers framework;\nb) why the F1 metric specifically is used as a loss;\nc) why these specific updates on parents of intervened edges are the choice of local updates;\nd) how exactly the intervention is performed, at least in the experiments?\n\n- l.94-95 : *Building on Meek (2013), Chickering (2002) proposes a greedy search algorithm that performs well in practice.* There seems to be a confusion in time here... wrong Google Scholar citation?\n\n- Can you increase the font of Figure 2?\n\n- l.403-404 : *Additionally, we notefFor large enough graphs, putting everything in context is simply not feasbile*. Typo? We note that for?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Originality: The submission brings an interesting perspective by making use of the literature on LLMs as optimizers.\n\nQuality: the experiments are extensive and exhaustive, performing multiple ablation studies on the model's properties as well as on aspects such as memorization.\n\nClarity: the paper is mostly clear in my opinion.\n\nSignificance: the experiments are interesting as they underline the importance of finding a subtle balance w.r.t. local updates, between throwing the whole graph into the prompt and only modifying intervened edges."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper builds on previous literature on using LLMs for causal discovery on one side, and for active black-box function optimization on the other side, to iteratively update a graph using ground-truth edges obtained through interventions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Originality/Significance/Quality: this point harms the correctness of the claims of the paper and is my main concern: it seems like the iterative updates do not satisfy the framework of LLMs as optimizers. From my understanding of the submission and the references, this framework consists in having the LLM decide on the next points in the admissible space to query, based on former (point, function realization) couples. But here, the next edges to query are chosen in a pre-determined, algorithmic manner, based on confidences, and the objective (the F1 score) is not used as an objective to optimize and is never passed to the LLMs. The LLM is simply used in a post-hoc manner *after* having queried edges and read their associated ground-truth output labels.\n\nClarity: there are a few unclear points or mistakes, detailed in the questions." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Refer to the weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "(1) Introducing LLMs to study causal discovery is an interesting direction.\n\n(2) The authors' writing is clear, making it easy to read and understand." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper applies large language models (LLMs) to causal reasoning. Specifically, the authors prompt LLMs to address two key tasks: (1) selecting which edge to intervene on in the next round, and (2) updating the predicted causal graph. The authors demonstrate that their approach significantly outperforms a random selection baseline across eight different real-world graphs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "(1) The experimental setup is quite simple, comparing only three basic methods: random selection, direct LLM, and static confidence selection.\n\n(2) Additionally, the comparison should include the performance of different language models, not just one.\n\n(3) In the main experimental section, it would be better to include a table for quantitative results alongside the graphs.\n\n(4) The experimental setup is overly simplistic, and for a conference like ICLR, the complexity of the method, theoretical analysis, and experimental thoroughness are insufficient."
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "We utilize LLMs as a black box optimizer to iteratively propose and update interventions on a causal graph" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024icda,\ntitle={{ICDA}: Interactive Causal Discovery through Large Language Model Agents},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xA8WW2dlTX},\nnote={under review}\n}" }, "abstract": { "value": "Large language models (\\textbf{LLMs}) have emerged as a powerful method for causal discovery. Instead of utilizing numerical observational data, LLMs utilize associated variable \\textit{semantic metadata} to predict causal relationships. Simultaneously, LLMs demonstrate impressive abilities to act as black-box optimizers when given an objective $f$ and sequence of trials. We study LLMs at the intersection of these two capabilities by applying LLMs to the task of \\textit{interactive causal discovery}: given a budget of $I$ edge interventions over $R$ rounds, minimize the distance between the ground truth causal graph $G^*$ and the predicted graph $\\hat{G}_R$ at the end of the $R$-th round. We propose an LLM-based pipeline incorporating two key components: 1) an LLM uncertainty-driven method for edge intervention selection, and 2) a local graph update strategy utilizing binary feedback from interventions to improve predictions for non-intervened neighboring edges. Experiments on eight different real-world graphs show our approach significantly outperforms a random selection baseline: at times by up to 0.5 absolute F1 score. Further, we conduct a rigorous series of ablations dissecting the impact of each component of the pipeline. Finally, to assess the impact of memorization, we apply our interactive causal discovery strategy to a complex, new (as of July 2024) causal graph on protein transcription factors. Overall, our results show LLM-driven uncertainty-based edge selection with local updates performs strongly and robustly across a diverse set of real-world graphs." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Causal Discovery", "LLM", "Black box optimizer" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d1d07828e7ac3cecb646de94b04c76ba958f2e9c.pdf" }, "presentation": null, "primary_area": { "value": "causal reasoning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "ICDA: Interactive Causal Discovery through Large Language Model Agents" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xAM9VaXZnY
What Can We Learn from State Space Models for Machine Learning on Graphs?
main
Active
Graph neural networks;state space models
learning on graphs and other geometries & topologies
3;5;5;6
4;4;4;3
3;2;2;2
2;2;2;3
2;3;2;3
4.75
3.75
2.25
2.25
2.5
-0.662266
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Please address the aforementioned concerns.\n2. Some typos: \n - Equation 5 (line 201): the first term doesn’t require parentheses. \n - “reply on” -> rely on in lines 268 and 346. \n - There are duplicate references for Behrouz & Hashemi, 2024 \n\nConclusion:\n\nWhile the proposed idea is appealing, and I acknowledge its potential impact, I am not sure that the paper is ready for ICLR in its current form. Therefore, I am hesitant to fully support acceptance, but I am willing to increase my score if the concerns and questions are addressed." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper provides a method to customize the recursion in SSMs for graphs.\n- It proposes a method for designing a permutation-invariant kernel for efficient global convolution operations on graphs.\n- The paper demonstrates provably stronger expressiveness of the proposed model for 3-paths and 3-cycles and also for 4-paths and 4-cycles." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work extends state space models (SSMs) from sequence modeling to the domain of graph-structured data. By tailoring SSMs for graphs, the proposed model (GSSC)\ncan capture long-range dependencies and overcome the limitations of Message Passing Neural Networks (MPNNs) while offering efficient computation, addressing the quadratic complexity of graph transformers. To preserve permutation equivariance on graphs, it outlines a method for designing a permutation-invariant kernel for convolution operations on graphs. Furthermore, the work extends the proposed model to a data-dependent version by defining a selection mechanism for graph-structured data. The proposed model demonstrates provably stronger expressiveness than MPNNs and Graph Spectral Convolution in counting graph substructures." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The distinction between the proposed model (equation 4) and linear graph attention is minimal. It is known, even prior to Dao & Gu (2024), that SSMs can be represented as linear attention. The global convolution representation used here is also an equivalent form of SSMs. Additionally, the use of positional encodings is not novel, as it is a feature used by other works such as GraphGPS, which propose a general framework for transformer-based models and can adopt their approximations like Performer and linear attention. \n\n2. The presentation requires improvement, and some claims need to be more precise. For example: \n - a. 
In the *Computational efficiency and parallelism* paragraph at line 143, the text should clarify that SSMs like S4, when represented as a global convolution (equation 2), can leverage the FFT algorithm to achieve parallel and efficient quasi-linear complexity. Parallel scan is used for the recurrence form of equation (1) under certain conditions (employed by data-dependent SSMs like Mamba), which also results in O(n log n) computation while offering parallelization. \n - b. The statement that the kernel should be factorizable as a dot product to be permutation-invariant (line 127) needs revision. The dot product between absolute position representations is one way to achieve translation invariance. Reference [2] offers alternative methods, like cross-correlation, to achieve translation invariance in kernels for global convolution using translation-equivariant functions. \n - c. The factorized form in line 259 is not equivalent to the data-dependent convolution presented in the preceding line. \n - d. The new positional encodings in equation (6) are positional encodings of all nodes but utilize only the features of node u (as it is a function of $x_u$ only). Lines 267 and 267 need correction to reflect this. \n - e. The text should elaborate on how $\\phi()$ in eqn (5) is modeled to be a permutation-equivariant function and capture interactions between frequencies.\n\n3. **Insufficient Empirical Studies**: The literature review lacks citations for many state-of-the-art models, and the experimental section lacks comparisons to them. \n - Since the proposed model is compared with GSC and Linear Graph Transformers in the paper, it is essential to include their performance comparison in the experimental section (particularly in Table 2 for graph substructure counting, where the proposed model is expected to show superior expressiveness). \n - Comparisons with newer models like Spatial-Spectral GNN [1] and Polynormer [3] are also necessary. \n - Additionally, comparisons against other SSM-based models (Graph-Mamba I and II), GSC, Linear Graph Transformers, and recent models in Tables 2 and 3 are important to validate the proposed model empirically." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "(i) For the scalable version of the architecture, where you proposed using the first k eigenvectors, how does the method address the issue of missing eigenvectors in the middle and at the end of the spectrum?\n\n(ii) What are the main challenges in extending GSSC to heterophilic graphs, and do you have any preliminary insights on how these challenges could be addressed?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "I have found the paper well written and self-contained. 
I think a non-expert could find most of the information in the paper, and I appreciate this aspect.\n\nFigures 1 and 2, which demonstrate the problem domain and architecture, are interesting and easy to read. I commend the authors for their explicit effort in making these illustrations clear and informative.\n\nThe insights are didactic and well communicated. The conclusions drawn from the experiments look interesting and valuable for future practitioners, though I think a synthesis would be beneficial for the reader." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces the Graph State Space Convolution (GSSC) method, an extension of State Space Models (SSMs) to graph data. GSSC utilizes global permutation-equivariant set aggregation and factorizable graph kernels based on relative node distances as its convolution kernels." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Despite these merits, I have the following concerns about the paper.\n\n1- While there is a careful analysis of the different design decisions/performance tradeoffs, I feel that there is only a limited understanding of which properties of the architecture lead to these decisions/performance differences.\n\n2- Scalability Concerns: The paper acknowledges the challenge of scalability for larger graphs. To address this, the authors could explore methods for optimizing the computational complexity of the GSSC architecture.\n\n3- Weak experimental study: The paper lacks experimental evaluation of the Graph State Space Convolution (GSSC) method on heterophilic datasets. This omission is significant as such studies are crucial to assess GSSC's performance with heterophilic data and its robustness against issues like over-squashing and over-smoothing, which are common challenges in graph data analysis." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Compared to other graph SSM works, this study indeed offers a novel perspective. In your opinion, what advantages does this new viewpoint provide over other methods? How do your experiments support these advantages?\n\n2. The title is \"What Can We Learn from State Space Models for Machine Learning on Graphs.\" Could you elaborate a bit more on other graph SSM models / SSM models in other domains, and compare them with GSSC?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. **Novelty**: Unlike previous methods that design SSM variants on graphs, this paper approaches the problem from the perspective of separable distance kernels on graphs. This idea is both novel and interesting.\n\n2. 
**Comprehensive Evaluations**: The experimental section of this paper covers synthetic datasets, real datasets, and computational efficiency benchmarks, providing a comprehensive evaluation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Machine learning on graphs often uses Message Passing Neural Networks (MPNNs), but these have limited expressive power. Graph State Space Convolution (GSSC) is proposed as a new method that extends State Space Models (SSMs) to graph data, overcoming these limitations and demonstrating superior performance on benchmark datasets. GSSC achieves good results on many datasets and offers a scalable solution for graph machine learning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Experimental Design**: My main concern is that the model evaluated in the experiments is a hybrid of GSSC and MPNN, without showing the performance of GSSC alone. This makes it difficult to assess the independent contribution of GSSC, affecting the judgment of its effectiveness. I strongly recommend that the authors provide experimental results for the GSSC module alone, which will help readers understand the primary contributions of GSSC.\n\n2. **Baseline Selection**: In Section 5.1, the authors compare only with MPNN, without providing baseline comparisons with SSM or GT, which seems insufficient. Considering that the authors claim GSSC is a replacement module for Transformers, I suggest adding comparisons with other models that include shortest path information (such as high-order MPNN) or global information (such as Graph Transformer). Moreover, the current baselines are mainly single MPNN models, and comparing them with the hybrid model (GSSC+MPNN) seems unfair.\n\n3. **Inconsistency in Model Architecture**: In Section 5.1, the authors use selective GSSC, but it is not used in other benchmarks. If different architectural variants of the model are used in the experiments, I advise clearly indicating this in the tables. Additionally, the authors claim that GSSC without the selective mechanism is already powerful enough; does this imply that using selective GSSC would be better? If so, it is recommended to provide relevant experimental results to support this argument. If selective GSSC is not used due to other disadvantages (such as computational efficiency), it would be beneficial to discuss this further in the paper. Generally, a unified model architecture is more attractive than one that requires adjustments across different benchmarks." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper achieves competitive performance compared to SOTA methods.\n2. 
The paper proposes a novel approach to extending state-space models to graphs.\n3. The paper provides a plausible framework for capturing long-range dependencies efficiently." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes Graph State Space Convolution (GSSC), an extension of state-space models to graph-structured data. The authors emphasize GSSC’s capability for capturing long-range dependencies in linear time and demonstrate competitive performance across several benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My main concerns relate to the complexity claims, specifically:\n1. **Layer Complexity vs Expressivity:** The paper states that the complexity of a GSSC layer is “… $O(nmd)$ where $n$ is the number of nodes and $m$, $d$ are hidden and positional encoding dimension.” (239-240) This means that a GSSC layer has $O(|V|)$ complexity. Consequently, GSSC is in general *incapable of examining every edge in the graph*, unlike MPNNs with $O(|V|+|E|)$ complexity, such as GINE [1]. Although the authors prove that GSSC is “more powerful than MPNNs” *in terms of the WL hierarchy*, the expressivity implications of not being able to examine every edge seem to be overlooked. Even preprocessing the graph to incorporate edge features into nodes, which itself requires $O(|E|)$ time, would still necessitate $m\\in O(\\frac{|E|}{|V|})$ to store these features without information loss, thus exceeding $O(|V|)$ complexity overall.\n2. **Preprocessing Complexity:** The paper claims that finding the top $d$ eigenpairs with Lanczos methods has $O(nd^2)$ complexity (286-287). However, since sparse matrix-vector multiplication with the Laplacian matrix is necessary for Lanczos methods, the complexity per iteration would be at least $O(|E|)$, resulting in an overall complexity of at least $O(d|E|)$ to find $d$ eigenpairs. This exceeds the paper’s claim of $O(nd^2)$ preprocessing, and thus requires clarification.\n\nDespite these concerns, the paper’s main contributions remain valid. I would appreciate it if the authors could clarify these points during the rebuttal phase.\n\n[1] https://arxiv.org/abs/1905.12265" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024what,\ntitle={What Can We Learn from State Space Models for Machine Learning on Graphs?},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xAM9VaXZnY},\nnote={under review}\n}" }, "abstract": { "value": "Machine learning on graphs has recently found extensive applications across domains. However, the commonly used Message Passing Neural Networks (MPNNs) suffer from limited expressive power and struggle to capture long-range dependencies. Graph transformers offer a strong alternative due to their global attention mechanism, but they come with great computational overheads, especially for large graphs. In recent years, State Space Models (SSMs) have emerged as a compelling approach to replace full attention in transformers to model sequential data. It blends the strengths of RNNs and CNNs, offering a) efficient computation, b) the ability to capture long-range dependencies, and c) good generalization across sequences of various lengths. However, extending SSMs to graph-structured data presents unique challenges due to the lack of canonical node ordering in graphs. In this work, we propose Graph State Space Convolution (GSSC) as a principled extension of SSMs to graph-structured data. By leveraging global permutation-equivariant set aggregation and factorizable graph kernels that rely on relative node distances as the convolution kernels, GSSC preserves all three advantages of SSMs. We demonstrate the provably stronger expressiveness of GSSC than MPNNs in counting graph substructures and show its effectiveness across 11 real-world, widely used benchmark datasets. GSSC achieves the best results on 6 out of 11 datasets with all significant improvements compared to the state-of-the-art baselines and second-best results on the other 5 datasets. Our findings highlight the potential of GSSC as a powerful and scalable model for graph machine learning. Anonymous code\nis available at https://anonymous.4open.science/r/GSSC-5ED8." 
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Graph neural networks", "state space models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/0604423a66dd548b730b8ce28d2373b8cc151aaf.pdf" }, "presentation": null, "primary_area": { "value": "learning on graphs and other geometries & topologies" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "What Can We Learn from State Space Models for Machine Learning on Graphs?" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xAYOfMV264
A Dual-Agent Adversarial Framework for Generalizable Reinforcement Learning
main
Active
Generalizable Reinforcement learning;Adversarial Learning
reinforcement learning
3;5;5;5;6
4;3;3;4;3
1;3;4;3;3
1;3;4;3;3
2;3;3;3;3
4.8
3.4
2.8
2.8
2.8
-0.666667
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "n/a" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* Will the theorem hold if $M_{train}$ is unbounded? In RL, policies typically interact with the environment on the fly, allowing for the gathering of infinite samples.\n* Are the two characteristics discussed in Section 4.2 sufficient to ensure robust generalization performance? If not, what additional considerations could be relevant?\n* Are any weights needed in Equation 19? If not, why?\n* Would it be beneficial to consider heterogeneous encoders in the proposed method?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well-written and easy to follow, with an effective and engaging presentation. The paper begins with a theoretical analysis, systematically deriving the motivation and key designs, and ultimately showing good results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a dual-agent adversarial policy learning framework to address generalization gaps in reinforcement learning (RL). The authors first derive a lower bound on generalization performance, showing that optimizing this bound corresponds to constrained optimization at each RL step. They then leverage some approximations, leading to the dual-agent adversarial framework proposed in this paper. It employs two identical policy networks that are updated alternately to minimize reliance on irrelevant features through a combined loss function, which includes both the primary task loss and a new adversarial loss that combines adversarial attacks on the other agent and robust defence of itself. Experiments on the ProcGen benchmark show that this approach outperforms PPO and DAAC baselines in generalization." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My main concern is the evaluation. Although the performance is promising, the paper lacks in-depth analysis and extensive discussion on several aspects.\n\nCurrently, it seems that even without considering generalization, the proposed method shows good performance. Thus, it is unclear whether this is due to better convergence or better generalization capability. We should be careful about this when drawing the conclusion that the proposed method has better generalization performance. It would also be helpful to add some experiments/discussion comparing only generalization when the two baselines have similar in-distribution performance. Also, one needs to check if the comparison is fair or not. It would be helpful to report the wall-clock time and number of gradient steps for both the proposed method and the baselines.\n\nSecond, the approach is evaluated on only one generalization setup. However, it is unclear how challenging such a generalization setting is. 
It would be beneficial to conduct further analysis, assessing the degree to which the proposed algorithm enhances generalization across varying levels of difficulty, such as easy, moderate, and challenging generalization cases.\n\nThird, please also consider adding the training complexity compared to baseline methods, as well as a discussion on the impact of the hyperparameter $\\alpha$.\n\nAdditionally, the authors may want to compare and discuss their method relative to other approaches aimed at enhancing RL generalization, such as [1][2]. Given that these methods may also align with the two characteristics described in Section 4.2, further elaboration on the distinctions or similarities would strengthen the paper.\n\n* [1] MaDi: Learning to Mask Distractions for Generalization in Visual Deep Reinforcement Learning\n* [2] Policy Rehearsing: Training Generalizable Policies for Reinforcement Learning\n\nIn Section 2, the introduction of the concept of MDP state semantics and the subscript $m$ in the MDP notation is not well-motivated or clearly explained. Consider improving clarity by first defining the distribution $p_M$ explicitly and then introducing $m$. Furthermore, the limitations and future work should be discussed in the paper." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- One claim is that the method can be widely used with a variety of algorithms. Would you be able to share results on how this approach transfers to different online RL algorithms in the ProcGen benchmark?\n- Would you be able to provide some qualitative analysis of the representations learned by your framework compared with those of PPO and DAAC to further validate the claim that robust representations are being learned?\n- Could you share results on the sensitivity of $\\alpha$ and the selection criterion for it?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Empirically show a significant improvement over prior work in the ProcGen environment with their adversarial learning framework\n- Provide theoretical insights about how a policy's robustness to irrelevant features improves generalization performance, which is a novel contribution that can be generally applied to any algorithm." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel adversarial learning framework that involves a minimax game process between two homogeneous agents to improve the generalization capability of these agents in RL. This framework integrates with existing RL algorithms such as PPO, leverages no additional human prior knowledge (which can lead to poor robustness in generalization), and has minimal hyperparameters, allowing for effective applicability. 
The authors additionally derive lower bounds for the training and generalization performance of the agent and show that by minimizing the policy's robustness to irrelevant features, one can improve generalization performance. The authors evaluate their framework in the ProcGen environment, showing gains over algorithms such as PPO and DAAC." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Several works such as DRAC and RARL consider a multi-agent/adversarial optimization process in RL. It would be good to include an extensive evaluation of these approaches as baselines and to contextualize the novelty of your approach with respect to each baseline.\n- The method is primarily evaluated in the ProcGen environment and could benefit from additional empirical evaluation on a larger set of RL benchmarks to further evaluate the efficacy of the approach.\n- GANs and other adversarial optimization techniques commonly have issues with mode collapse, vanishing gradients, and convergence, which all make optimization more difficult. Though this is controlled with the parameter $\\alpha$, it would be good to consider the tradeoff between robustness to adversarial threats and the performance of the agent." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the Weaknesses section." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. **Solid Theoretical Background**: The approach is backed by strong theory, clearly explaining how it supports RL generalization.\n2. **No Human Bias in Addressing Generalization**: The method achieves generalization without relying on human biases, such as hand-designed augmentations.\n3. **No Extra Network Parameters**: The framework achieves its goals without adding network parameters, relying on just one hyperparameter for flexibility.\n4. **Novel Idea**: The dual-agent adversarial setup is an innovative way to tackle RL generalization.\n5. **Strong Performance**: The approach performs well across tested environments, demonstrating robust generalization and effectiveness." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a dual-agent adversarial framework to improve generalization in reinforcement learning (RL). In this setup, two agents interact adversarially, each attempting to disrupt the other's policy while maintaining stability in its own. This competition drives both agents to develop robust and generalizable strategies. The framework is efficient, adding only one hyperparameter, and shows strong performance improvements in challenging environments, especially when used with standard RL algorithms like PPO. 
This approach offers a promising solution for enhancing RL generalization without relying on complex data augmentations or human-designed biases." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Limited Environments and Baselines**: Testing is somewhat limited in environments and baseline comparisons. Adding diverse environments, such as DMC-GB [1], and competitive baselines like PIE-G [2], SVEA [3], and ARPO [4] would provide a more complete comparison and further demonstrate the model's capabilities.\n\n[1] Generalization in reinforcement learning by soft data augmentation, Hansen et al., ICRA 2021\n\n[2] Pre-Trained Image Encoder for Generalizable Visual Reinforcement Learning, Yuan et al., NeurIPS 2022\n\n[3] Stabilizing deep Q-learning with ConvNets and vision transformers under data augmentation, Hansen et al., NeurIPS 2021\n\n[4] Adversarial Style Transfer for Robust Policy Optimization in Deep Reinforcement Learning, Rahman et al., arXiv 2023" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see above." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "Generalization in deep reinforcement learning is a highly important research direction." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The submission proposes an adversarial framework that involves a game process between two agents: each agent seeks to maximize the impact of perturbing the opponent’s policy by producing representation differences for the same state, while maintaining its own stability against such perturbations. The submission conducts experiments in the ProcGen environment with 3 random seeds in 8 different games and provides a comparison against DAAC and PPO." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The theoretical claims of the submission follow almost immediately from previous work, and do not bring any additional new knowledge.\n\nTable 1 should include standard deviations. Three random seeds is a relatively small number for interpreting the reported results. The results reported in Table 1 and Figure 5 are contradictory. Table 1 reports that DAAC in climber is 3.299 and PPO + Adv. (Agent 1) is 4.473. However, Figure 5 clearly shows the DAAC performance as the highest. How is this possible?\n\nWhy is the method only compared to DAAC and the original PPO? There are more studies on generalization in deep reinforcement learning.\n\nIn the DAAC paper there is another algorithm called IDAAC that performs better. Why is the algorithm IDAAC not included in the comparison?\n\nI would also recommend checking page 9 and page 10 of the ProcGen section of paper [1]. 
In particular, the paper [1] states for ProcGen that: \n\n*“We note that a number of improvements reported in the existing literature are only 50 − 70% likely.”*\n\nFurthermore, the paper [1] states:\n\n*“Instead, we recommend using normalization based on the estimated minimum and maximum scores on ProcGen and reporting aggregate metrics based on such score.”*\n\nAs has been reported in [1] and [3], the performance of PPG [2] is also quite high. It might be good to include PPG as a comparison baseline.\n\nProcGen seems to have 16 tasks. Both of these papers [1,2] test across the 16 games in the ProcGen environment. The submission tests its proposed algorithm in only 8 of them.\n\n[1] Deep Reinforcement Learning at the Edge of the Statistical Precipice, NeurIPS 2021.\n\n[2] Phasic Policy Gradient, ICML 2021.\n\n[3] Decoupling Value and Policy for Generalization in Reinforcement Learning, ICML 2021.\n\nMore recent techniques report substantially higher scores in the ProcGen environment [1,2].\n\n[1] DRIBO: Robust Deep Reinforcement Learning via Multi-View Information Bottleneck, ICML 2022.\n\n[2] Explore to Generalize in Zero-Shot RL, NeurIPS 2023.\n\n\nThe way adversarial learning is characterized in the introduction is incorrect. In the introduction it is stated that: \n\n*“Adversarial framework facilitates the development of agents capable of adapting to new environments by emphasizing the distinction between relevant and irrelevant information.”*\n\nwhile referring to these studies [1,2,3] as adversarial learning:\n\n[1] State-Adversarial DQN for robust deep reinforcement learning, NeurIPS 2020.\n\n[2] Robust adversarial reinforcement learning, ICML 2017.\n\n[3] Robust Deep Reinforcement Learning through Adversarial Loss, NeurIPS 2021.\n\nHowever, recent studies demonstrated that in fact adversarially trained policies cannot generalize, and furthermore that the generalization skills of standard reinforcement learning training are substantially higher [1].\n\n[1] Adversarial Robust Deep Reinforcement Learning Requires Redefining Robustness, AAAI 2023.\n\n\nSince the submission proposes an adversarial training method, it would have been good to test against adversarial examples as well. These do not have to be the most state-of-the-art adversarial attacks, but it would still have been good to include them for reference.\n\n\nAnother thing I want to mention is that, by employing the proposed adversarial learning framework, the number of encoder parameters that need to be trained is in fact doubled. This brings a new set of questions. Is it really a fair comparison against the lower-capacity models of previous work? Would the prior methods also perform well if we simply increased the number of parameters in the encoder?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How does the adversarial training framework handle environments where the distinction between relevant and irrelevant features is not well-defined or context-dependent?\n- What strategies could be employed to reduce the computational overhead introduced by the dual-agent setup, especially in more complex or resource-constrained environments?\n- Could the method be extended or adapted to improve generalization in real-world tasks beyond the Procgen benchmark, and what modifications would be necessary to achieve this?\n- What impact would the framework have in environments with continuous action spaces or higher-dimensional state representations, where irrelevant features may be harder to isolate?\n- How would performance vary in scenarios where adversarial training results in catastrophic forgetting of useful features, and what mechanisms could prevent this?\n- Are there any considerations for applying this approach to tasks with dynamic or evolving feature relevance, where the set of relevant features may change over time?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The introduction of a dual-agent adversarial framework is an innovative approach that addresses the pressing issue of overfitting and generalization in reinforcement learning, offering a new perspective on how agents can improve adaptability in varying environments.\n- The paper provides a strong theoretical foundation, proving that reducing an agent’s robustness to irrelevant features can lead to better generalization, enhancing the depth of the contribution.\n- The experiments conducted on the Procgen benchmark show significant performance improvements over existing methods like PPO, demonstrating the effectiveness of the proposed framework in real-world, challenging tasks.\n- By focusing on reducing overfitting and enhancing generalization, the paper addresses a critical gap in reinforcement learning research, providing solutions applicable to broader, more complex environments.\n- The framework is well-designed to scale across different environments, making it applicable to a wide range of RL tasks with high-dimensional observations." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a dual-agent adversarial framework aimed at improving the generalization capabilities of reinforcement learning (RL) models, which often struggle with overfitting and fail to adapt to minor variations in tasks. The proposed framework facilitates a game process between two agents that learn to perturb each other’s policies while maintaining their own stability, enabling them to focus on relevant features in high-dimensional observations. Extensive experiments on the Procgen benchmark demonstrate that this adversarial approach significantly enhances the agents’ performance, especially in challenging environments, outperforming traditional RL algorithms like Proximal Policy Optimization (PPO). Additionally, the authors theoretically prove that reducing an agent’s robustness to irrelevant features can improve its generalization performance. 
Overall, the study marks a significant advancement in addressing generalization challenges in deep reinforcement learning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- While the framework performs well on the Procgen benchmark, its applicability to real-world tasks remains untested, leaving questions about how well it generalizes outside controlled environments.\n- The dual-agent adversarial framework introduces additional computational complexity, which may pose challenges in terms of scalability and efficiency for resource-constrained systems.\n- Although the framework is shown to improve generalization, more detailed ablation studies could have been included to clarify the contribution of individual components, such as the specific impact of the adversarial training mechanism.\n- The paper assumes that irrelevant features can be identified and suppressed, but it does not sufficiently address how to detect these features in environments where their classification is unclear or context-dependent.\n- The comparison with state-of-the-art methods is somewhat limited, with a stronger focus on performance gains rather than in-depth analysis of differences in behavior between approaches." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "The paper proposes a dual-agent adversarial framework for generalizable reinforcement learning." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024a,\ntitle={A Dual-Agent Adversarial Framework for Generalizable Reinforcement Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xAYOfMV264},\nnote={under review}\n}" }, "abstract": { "value": "Recently, empowered with the powerful capabilities of neural networks, reinforcement learning (RL) has successfully tackled numerous challenging tasks. However, while these models demonstrate enhanced decision-making abilities, they are increasingly prone to overfitting. For instance, a trained RL model often fails to generalize to even minor variations of the same task, such as a change in background color or other minor semantic differences. To address this issue, we propose a dual-agent adversarial policy learning framework, which allows agents to spontaneously learn the underlying semantics without introducing any human prior knowledge. Specifically, our framework involves a game process between two agents: each agent seeks to maximize the impact of perturbations on the opponent's policy by producing representation differences for the same state, while maintaining its own stability against such perturbations. This interaction encourages agents to learn generalizable policies, capable of handling irrelevant features from the high-dimensional observations. Extensive experimental results on the Procgen benchmark demonstrate that the adversarial process significantly improves the generalization performance of both agents, while also being applicable to various RL algorithms, e.g., Proximal Policy Optimization (PPO). With the adversarial framework, the RL agent outperforms the baseline methods by a significant margin, especially in hard-level tasks, marking a significant step forward in the generalization capabilities of deep reinforcement learning." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Generalizable Reinforcement learning", "Adversarial Learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c2da026386f692b9b177fe070b11cec1953ed308.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "A Dual-Agent Adversarial Framework for Generalizable Reinforcement Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xAZLCWbsTF
Revisiting Emergent Correspondence from Transformers for Self-supervised Multi-frame Depth Estimation
main
Active
Self-supervised Depth estimation; Multi-frame Depth estimation
applications to computer vision, audio, language, and other modalities
3;3;5;5
5;4;1;4
2;3;3;2
2;2;2;2
3;3;3;3
4
3.5
2.5
2
3
-0.666667
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I mentioned all comments including reasons and suggestions in the above sections. I recommend that the author will provide all the concerns, and improve the completeness of the paper. If the rebuttal period resolves the above-mentioned concerns, I will gladly raise my score. Also, there are little vague sentences and grammatical errors in the paper. I recommend that the author will revise the paper." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "First of all, the motivation for proposing a new model to address the limitations of existing epipolar-based methods in handling dynamic objects and the need for additional models is well-founded. \n\nWhile it may lack significant technical novelty, it shows a solid understanding of the operation of existing modules and effectively applies them to the target problem. More specifically, applying cross-attention mechanism for full cost volume calculation and incorporating MIM to improve matching similarity are highly appropriate choices.\n\nIn experimental section, the proposed method achieved SoTA results in depth estimation benchmarks compared with existing self-supervised multi-frame depth prediction methods, demonstrating its effectiveness." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors proposed a novel self-supervised multi-frame depth estimation architecture that incorporate the CRAFT module to compress and refine the cost volume through attention mechanism and feature aggregation. The main argument is that training cross-attention layers for image reconstruction facilitates the implicit learning of a warping function, resembling the explicit epipolar warping used in existing methods. Also, by employing masked image modeling for pre-training, the authors can successfully leverage the cross-attention map as a full cost volume for depth prediction in dynamic scenarios without requiring additional information such as camera pose. In experimental section, the authors demonstrated that the proposed method can outperform traditional methods utilizing epipolar-based cost volume in challenging scenarios." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "One concern is the validity of the proposed CRAFT method for better representation learning. While the full cost volume is well-represented through a clear understanding of the cross-attention module, it is not clear whether the proposed module is operated to function as the author’s intention, because all experimental results include MIM, without an independent evaluation of the CRAFT module itself. 
Furthermore, the significant performance gap between training with and without CroCo pre-training raises further doubts about whether the proposed module is functioning as intended.\n\n\nAnother concern is the lack of explanation regarding the justification for cross-attention. I think that the authors should provide a more descriptive explanation as to why Transformers are advantageous for feature matching compared to CNNs. Although feature similarity is explicitly calculated through the cross-attention module, the relatively weaker inductive bias of Transformers does not necessarily translate into effectiveness for feature matching." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 1 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the Weaknesses section." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is well written, outlining the shortcomings of contemporary/previous works and clearly explaining the approach the paper follows to overcome those shortcomings.\n2. Proposes to use a full cost volume instead of an epipolar cost volume. Employs MIM-based pre-training and then cross-attention.\n3. Evaluates the model's performance on relevant datasets to point out the improvements over the shortcomings of previous methods on the KITTI and Cityscapes datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper deals with using a Transformer architecture for estimating depth from multiple frames in a self-supervised setting, without using explicit pose information. The paper proposes to use a full cost volume instead of an epipolar cost volume. It employs MIM-based pre-training and then cross-attention. The paper claims to handle noise much better and to need no explicit pose network or pose information." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The qualitative results provided in Figure 4 are not good. While there is improvement in depth prediction in the regions Figure 4 highlights with red boxes, the non-highlighted regions of the depth predictions look worse.\na. Trees in the second image are definitely not sharper compared to DynamicDepth & ManyDepth as presented in the paper.\nb. Buildings in the background are much clearer for DynamicDepth and ManyDepth in the first image.\n\n2. Since the task proposed here is very close to multi-view stereo (MVS), could the authors provide a 3D reconstruction view of the scene to compare with earlier MVS methods?\n\n3. The evaluation is limited to outdoor scenes; it would be great to see the efficacy of the model on indoor datasets such as ScanNet or NYU." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the weakness section." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well-written and structured;\n\nThe experiments are thorough and conducted on multiple datasets;\n\nThe proposed method outperforms previous state-of-the-art methods;" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors target at the self-supervised multi-frame monocular depth estimation. Besides, they propose to use the cross-attention to replace conventional cost volume. The proposed method is validated on the KITTI and Cityscapes datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Using cross-attention to replace the cost volume has been explored in previous methods, such as [1]:\n\n[1] Revisiting Stereo Depth Estimation From a Sequence-to-Sequence Perspective with Transformers, ICCV 2021.\n\nCould you provide a detailed explanation of the structural differences between the proposed method and CRFAT? It currently appears that the differences are mainly at the output level. \n\nAdditionally, could you offer a comparison of the computational complexity between the proposed method and ManyDepth?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1.\tCompare more recent methods in Cityscapes.\n2.\tPlease compare fairly. Comparison on fair feature extractors.\n3.\tCompare model complexity and inference time.\n4.\tOther questions refer to weakness." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The idea is simple, clear and easy to reproduce.\n2. The writing is excellent." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a self-supervised multi-frame depth estimation framework, which introduces the cross-attention mechanism to implicitly learn the warping function instead of explicit epipolar warping. 
To this end, this paper proposes the CRoss-Attention map and Feature aggregaTor (CRAFT), which is designed to effectively leverage the matching information of the cross-attention map by aggregating and refining the full cost volume. In addition, CRAFT is used in a hierarchical manner to improve the depth prediction. Evaluations on the KITTI and Cityscapes datasets demonstrate that this work performs effectively in environments with dynamic objects and image noise." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The method presented in this paper lacks novelty. The cross-attention mechanism or the transformer architecture has been widely used in the depth estimation task, and can improve depth quality well. Using a cross-attention mechanism instead of traditional warping and coarse-to-fine strategies is not particularly novel.\n2. Performance improvements are limited. The work is based on a transformer architecture, while most of the compared methods are based on CNN architectures such as ResNet and HRNet, which is unfair. Moreover, the quantitative results in Table 4 are worse than the 2022 work and the recent work [1].\n\n[1] Miao X, Bai Y, Duan H, et al. Ds-depth: Dynamic and static depth estimation via a fusion cost volume[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2023." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024revisiting,\ntitle={Revisiting Emergent Correspondence from Transformers for Self-supervised Multi-frame Depth Estimation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xAZLCWbsTF},\nnote={under review}\n}" }, "abstract": { "value": "Self-supervised multi-frame depth estimation predicts depth by leveraging geometric cues from multiple input frames. Traditional methods construct cost volumes based on epipolar geometry to explicitly integrate the geometric information from these input frames. Although this approach may seem effective, the epipolar-based cost volume has two key limitations: (1) it assumes a static environment, and (2) it requires pose information during inference. As a result, this cost volume fails in real-world scenarios where dynamic objects and image noise are often present, and pose information is unavailable. In this paper, we demonstrate that the cross-attention map can function as a full cost volume to address these limitations. Specifically, we find that training the cross-attention layers for image reconstruction enables them to implicitly learn a warping function within the cross-attention, resembling the explicit epipolar warping used in traditional self-supervised depth estimation methods. To this end, we propose the CRoss-Attention map and Feature aggregaTor (CRAFT), which is designed to effectively leverage the matching information of the cross-attention map by aggregating and refining the full cost volume. Additionally, we utilize CRAFT in a hierarchical manner to progressively improve depth prediction results through a coarse-to-fine approach. Thorough evaluations on the KITTI and Cityscapes datasets demonstrate that our approach outperforms traditional methods. 
In contrast to previous methods that employ epipolar-based cost volumes, which often struggle in regions with dynamic objects and image noise, our method demonstrates robust performance and provides accurate depth predictions in these challenging conditions." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Self-supervised Depth estimation; Multi-frame Depth estimation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/00c698e26a6a6c2fe9bf2f37816027600e6c73fe.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Revisiting Emergent Correspondence from Transformers for Self-supervised Multi-frame Depth Estimation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
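Several reviews of this record question the abstract's central claim that a cross-attention map can act as a full cost volume. A minimal sketch of that idea follows, assuming plain normalized features in place of the learned query/key projections a real model would use; `feat_cur`, `feat_ref`, and the temperature choice are illustrative assumptions, not the paper's CRAFT module.

```python
import torch
import torch.nn.functional as F

def cross_attention_cost_volume(feat_cur, feat_ref, temperature=None):
    # feat_cur: (B, N, C) tokens of the current frame (queries).
    # feat_ref: (B, M, C) tokens of a reference frame (keys).
    # Returns (B, N, M): for each current-frame token, a matching
    # distribution over *all* reference tokens, i.e. a full cost volume
    # with no epipolar constraint and no camera pose required.
    q = F.normalize(feat_cur, dim=-1)  # placeholder for a learned W_q projection
    k = F.normalize(feat_ref, dim=-1)  # placeholder for a learned W_k projection
    scale = temperature or feat_cur.shape[-1] ** 0.5
    sim = torch.einsum("bnc,bmc->bnm", q, k) / scale
    return sim.softmax(dim=-1)
```

Because each current-frame token attends over all reference-frame tokens, the resulting volume needs no epipolar constraint and no camera pose, which is exactly the property the abstract contrasts with epipolar-based cost volumes.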
xBuURiCChw
Isometric Regularization for Manifolds of Functional Data
main
Active
Isometric regularization;Geometric regularization;Implicit Neural Representation;Manifold Learning;Neural SDF;Neural BRDF;Neural Operator
unsupervised, self-supervised, semi-supervised, and supervised representation learning
3;6;6;6;6
4;4;3;4;3
2;3;3;4;3
2;3;3;2;3
1;3;3;4;3
5.4
3.6
3
2.6
2.8
-0.408248
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the previous section" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is well-written, well-organized, and easy to follow. \n2. The concept of encouraging isometric embedding from $z$—the latent variable—to $h(z) = F(\\cdot, z)$, a functional representation, is interesting. \n3. The efficient estimation of a distortion measure that quantifies the lack of isometry is also novel. \n4. Extensive experiments, including applications to neural SDFs and DeepONets, effectively demonstrate the robustness of the proposed method" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes an isometric regularization technique for manifolds embedded in an (infinite-dimensional) function space. The goal is to enforce the learned operator $h: Z \\to \\mathcal{F}$ as an isometric embedding. Here, $Z$ represents a (finite-dimensional) latent space with the standard metric, and $\\mathcal{F}$ is the space of functions on $X$, equipped with an appropriately defined metric. A distortion measure that quantifies the lack of isometry (up to rescaling) is introduced, along with an efficient method for its estimation. Numerical experiments validate the effectiveness of the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Algorithm 1 and its explanation in Appendix B should be expanded. For instance, further elaboration is needed on estimating the numerator in Equation 9. Additionally, in line 14 of Algorithm 1, should $G$ be given as $G = G_2 / G_1$? \n2. Given the significant computation involved in estimating the distortion measure, especially when computing gradients, a comparison of computational time should be included in the paper. \n3. While the method promotes an isometric embedding between the latent variable $z$ and its functional embedding $h(z) = F(\\cdot, z)$, it remains unclear if an autoencoder mapping the original input $u \\in \\mathcal{U}$ to its latent code is isometric. For example, in DeepONet, where $u$ represents the input function, if the encoder from $u$ to $z$ diverges substantially from an isometric mapping, enforcing isometry in $z \\mapsto h(z)$ may have limited impact." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- How does the computational cost of the proposed method compare to competing methods like LipDeepSDF? I’m particularly interested in understanding how the marginal cost of training the latent variable model with isometric regularization compares to the marginal benefit relative to cheaper baselines.\n- Why is it insufficient to enforce the smoothness of F with respect to the latent coordinates alone? Is this especially challenging from a technical standpoint? More challenging than isometric regularization?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- The paper is well-written, and the authors do a good job of introducing the mathematical preliminaries needed to describe their method.\n- The experiments persuasively make the case that isometric regularization empirically improves on the Lipschitz regularization baseline.\n- The numerical scheme the authors propose in Algorithm 1 to avoid costly Jacobian computations and sampling of spatial coordinates at training time is a reasonable strategy for approximating their regularizer." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new regularizer for implicit neural representations (INRs). Motivated by the fact that existing methods such as Lipschitz regularization result in overly smooth neural representations with respect to the spatial coordinates, the authors propose isometric regularization. This regularizer encourages the image of the latent variable model F(x,z) with respect to the latent z variable to form a well-behaved manifold, so that small changes in the latent variable lead to small changes in the resulting INR. As their regularizer requires expensive Jacobian computations at many spatial and latent inputs during training, the authors employ the Hutchinson trace estimator to avoid materializing full Jacobian matrices during training, and pre-sample input points to avoid the need to draw fresh samples during training. They then demonstrate that their method improves on an unregularized baseline and Liu et al’s Lipschitz regularization on surface reconstruction, BRDF learning, and neural operator learning tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I’m not sure if I understand the motivation for isometric regularization relative to simpler alternatives. The authors state that existing works such as Liu et al (2022) apply Lipschitz regularization to the latent variable models F to make them smooth across both the input space X and the latent space Z; this is problematic because for any fixed latent z, the function h(z) is overly smooth in X-space. While I agree that this is a notable deficiency in existing methods, why can one not simply enforce the smoothness of F with respect to the latent coordinates alone?\n\nThis question is especially salient in light of the high cost of isometric regularization relative to simpler baselines like Lipschitz regularization. I wonder whether the undeniable improvements arising from using isometric regularization outweigh its costs." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1.\tIn Line 416, it says “The effect of regularization is prominent when the training data N is greatly reduced to 20”. However, it is difficult to object the significant improvement of IsoAD with N=20 from Fig. 6? Could you clarify this argument?\n2.\tHow does the strength of regularization choose for different datasets in the paper?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tThe approach is grounded in differential geometry, offering a mathematically rigorous framework for managing functional data representations. \n2.\tIt demonstrated effectiveness on various data types, including neural signed distance functions (SDFs), bidirectional reflectance distribution functions (BRDFs), and neural operators, showcasing its versatility. \n3.\tThe method preserves data fidelity while regularizing, resulting in better interpolation, reconstruction, and generalization across different types and qualities of data." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a regularization method, Isometric Regularization, for handling manifolds in implicit neural representations (INRs) designed for infinite-dimensional functional data. It suggests that existing regularization methods often over-smooth data, leading to a loss of fidelity. To address this, the authors introduce a Riemannian manifold-based regularization that minimizes curvature and preserves the geometric consistency between the latent space and data manifold. Experiments across multiple data modalities demonstrate that this approach enhances the structure of the latent space, providing better generalization, especially for noisy and small datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tThis work uses Hutchinson’s stochastic trace estimator to approximate the trace terms, but it is unclear how much additional computational cost this method incurs compared to the baselines. It is better to include a discussion of the computational requirement.\n2.\tIt utilizes offline samples from the \"ground truth functional data\" to compute the expectation of $J^TJ$. It should provide the justification of accessing the ground truth data.\n3.\tThe regularization process should involve parameter tuning, which might be non-trivial. The paper lacks discussion of how to choose the strength of isometric regularization when applying the method to novel datasets or configurations." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Please clarify what is the total loss function that is being optimized for, and what parameters are being learnt, and what are being kept fixed?\n\nHow is the latent-space itself being learnt?\n\nDiscussion about training convergence and model size comparison (without needing new experiments).\n\nClarify ‘auto-decoder’ terminology." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Geometry-preserving and isometry-preserving loss functions are a novelty, and are a growing area of work in generative models. \n\nThe approach to add an isometry-preserving loss to conditional INRs is novel. Approximate algorithms to compute this isometry measure are proposed which enhance the strengths of the paper.\n\nMultiple types of results for proposed INR learning are shown including on 2D, 3D, and neural operator learning." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a way to conditional implicit neural models that may have been conditioned on additional information, such as a class label, or shape vector information. The core idea is to create a mapping from latent-codes to generated output in a way that preserves isometry – that is changes in input and changes in output should be related in a way that preserves distances and angles in the input and output spaces, subject to certain scaling parameters encoded by the function Jacobian. The paper shows experimental results that show the favorable properties of this approach." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Notation is unnecessarily complicated and hinders readability. For instance in the definition of the INR itself, \\mathcal{X} is more often than just \\mathbb{R}^m with m = 1, 2, or 3. Similarly, V is simply \\mathbb{R}. The lean toward generality isn’t helping with comprehensibility, although everything could be described without loss of generation for f: \\mathbb{R}^2 -> \\mathbb{R}. I would recommend taking such an approach and moving the generalized framework for an appendix instead. 
We suffer from the same issue once again when defining the structure of the latent space \mathcal{Z}, which always happens to be simply a vector space of the type \mathbb{R}^m.\n\nThe premise of Section 4, which is the core of the paper, starts off in a confusing way by stating “Without proper regularization, the latent space can become ill-behaved, overfitting to the data instances, which is exacerbated by the infinite dimensionality of the function space” – here it is unclear what is meant by ‘latent space can become ill-behaved’ – because until this point \mathcal{Z} is referred to as the latent-space, which is more or less given and not learnt end-to-end. E.g., as I understood it, \mathcal{Z} can refer to class categorical labels, or shape-features that are learnt elsewhere. So I am not at all clear what is meant by ‘latent space can become ill-behaved’.\n\nAfter reading Section 4, we are presented with a way to compute a measure of isometry – as I understand it, this measure really applies to the core INR map, but it is not clear how this measure is actually used in a loss function. No loss function is described from what I can tell. It is thus unclear whether the latent codes themselves are being optimized, or only the INR mapping portion. On reading the appendix, it is found that network parameters and the latent codes are both being optimized. I would recommend clarifying this upfront, as we do not see any loss function that suggests latent-code optimization, and this makes it hard to understand how this is being done. This also means that the latent-codes cannot be class categorical labels, as I had initially thought they could be. Overall, this leads to multiple types of confusion, which I would suggest describing really clearly.\n\nThe paper also uses the term auto-decoder multiple times, and it is unclear what that means. Do they mean a ‘decoder-only’ architecture, or an ‘auto-encoder’? I did additional digging among the references cited to see if I am missing something, but I could not find what an ‘auto-decoder’ means anywhere.\n\nMany conditional INRs also include an element of randomness that creates variation in output, which could be easily included in this model, but it was less clear whether this was already considered. \n\nDetails about training convergence and comparison with other approaches in terms of training times and model size would help position the experiments more clearly." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How does the proposed isometric regularization scale with respect to the dimensionality of the latent space and the number of data points? Have you tested its performance on more complex datasets or tasks beyond those mentioned in the paper?\n2. How sensitive is the approach to the choice of hyperparameters, such as the weight of the isometric regularization? Is there a systematic way to select these parameters for different types of data?\n3. 
Can you provide more insights into the computational costs relative to other regularization methods, such as Lipschitz regularization or weight decay, especially for larger datasets?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The proposed method is tested across various types of functional data, including neural SDFs, BRDFs, and neural operators. This cross-domain applicability highlights the generality and versatility of the approach. Experimental results show significant improvements in the quality of the latent space, interpolation, and reconstruction tasks. The breadth of synthetic to realistic tasks also gives a degree of insight into both the underlying functionality of the method and its practical use. The authors also address the computational complexity of infinite-dimensional regularizations and use practical approximations for computational feasibility. Finally, the authors provide an interesting theoretical perspective by interpreting the regularization problem through a Riemannian lens, allowing future researchers to leverage differential geometry principles to derive further regularization algorithms over functional spaces.\n\nIn general, this was a concrete, interesting problem with a fresh take that was presented well and demonstrated very well. I enjoyed reading this paper and believe it will make a positive impact on the field." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a novel isometric regularization technique for Implicit Neural Representations (INRs) aimed at regularizing learned functions, a crucial component of optimizing over a theoretically infinite-dimensional space. This regularization is interpreted through the lens of Riemannian geometry over functional spaces, which helps maintain a well-behaved \"latent space\" while reducing overfitting, even for small or noisy datasets. The proposed method is validated across multiple scenarios, including synthetic, 2D, and 3D visual data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While the paper demonstrates robustness and generalizability across several datasets, it does not provide sufficient exploration into the scalability of the approach for very high-dimensional latent spaces or large datasets. An obvious interesting application is in NeRFs, but in my view it is reasonable to defer this to future work -- there is enough for the paper to stand on its own.\n2. The paper lacks comprehensive ablation studies on the choices of parameters and the effect of the individual components (e.g. the approximations), which would help future researchers in applying and developing from this method." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We present isometric regularization for manifolds of functional data, leading to robust data representation learning." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024isometric,\ntitle={Isometric Regularization for Manifolds of Functional Data},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xBuURiCChw},\nnote={under review}\n}" }, "abstract": { "value": "While conventional data are represented as discrete vectors, Implicit Neural Representations (INRs) utilize neural networks to represent data points as continuous functions. By incorporating a shared network that maps latent vectors to individual functions, one can model the distribution of functional data, which has proven effective in many applications, such as learning 3D shapes, surface reflectance, and operators.\nHowever, the infinite-dimensional nature of these representations makes them prone to overfitting, necessitating sufficient regularization. Naïve regularization methods -- those commonly used with discrete vector representations -- may enforce smoothness to increase robustness but result in a loss of data fidelity due to improper handling of function coordinates. \nTo overcome these challenges, we start by interpreting the mapping from latent variables to INRs as a parametrization of a Riemannian manifold. We then recognize that preserving geometric quantities -- such as distances and angles -- between the latent space and the data manifold is crucial. As a result, we obtain a manifold with minimal intrinsic curvature, leading to robust representations while maintaining high-quality data fitting. Our experiments on various data modalities demonstrate that our method effectively discovers a well-structured latent space, leading to robust data representations even for challenging datasets, such as those that are small or noisy." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Isometric regularization", "Geometric regularization", "Implicit Neural Representation", "Manifold Learning", "Neural SDF", "Neural BRDF", "Neural Operator" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/fe3a1d3b061fec86139b5a6c12eaa50e2ed7e7fc.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/e3635f39908f81375ad59129941eb81b7ade6007.zip" }, "title": { "value": "Isometric Regularization for Manifolds of Functional Data" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xByvdb3DCm
When Selection meets Intervention: Additional Complexities in Causal Discovery
main
Active
causal discovery;selection bias;experiments;interventions
causal reasoning
5;6;6;6;6
2;3;3;3;3
3;3;3;3;3
2;3;3;3;3
3;2;3;3;2
5.8
2.8
3
2.8
2.6
1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "Update: It has been fixed. Thank you.\n\n---\n\n~~Dear Reviewer VRoQ,~~\n\n\n~~Thank you for your careful review. It appears the review intended for another submission may have been posted here by mistake. Could you please help us in resolving this?~~\n\n\n~~Thank you,~~\n\n~~-Authors of Submission 1361~~" }, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": { "value": "[Fixed] Possible review submission error" }, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- In example 2: This will be much more convincing if there is a numerical example that actually shows this (like example 1). Otherwise, it's not obvious to me why selection might imply that $X_1, X_3$ are dependent when conditioning on $X_2$.\n- In section 3.2: It is claimed that interventional distributions are not Markovian with respect to the original DAG, due to $X_1$ not being independent of $X_3$ when $X_2$ is conditioned on. The open path here goes through $X_2^*$. However, in example 4 it is claimed that conditioning on $X_2$ automatically conditions on $X_2^*$. This seems contradictory to the previous statement; what am I missing here?\n- Thm 1: If this is contrasted directly with the regular Markov property in DAGs, it would help clear up what the extra condition here is. Reading this section, the extra condition has not been motivated with respect to the examples shown. For example, how does this theorem differentiate DAGs $\\mathcal{G}$ and $\\mathcal{H}$?\n- I don't really understand how Lemma 1 \"misses key distributional information\"; could this be made more explicit?\n- L291: Why does the twin graph have fewer degrees of freedom with the invariance constraint? And why does it lead to CIs not implied by d-separations? The claim in L293 is also not clear at all to me.\n- Section 3.3: Intuitions behind the lemmas and why they are needed would greatly help here. Right now they are hardly motivated and tough to understand given the dense notation.\n- Algorithm 1 step 2: how is this done? What method is used to test if the conditional distributions are the same or not?"
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The authors have found an interesting flaw in previous methods in the presence of selection bias\n- The paper is well motivated with clear examples. Some examples can be made clearer (see below)" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The work shows that current theory for dealing with interventional causal discovery is insufficient under selection bias, as there are cases where the selection mechanism takes place before the intervention. The authors then propose a twin graph framework where the selection happens before the intervention and then define a Markov property on this twin graph that implies certain independences in distributions when conditioned on the selection mechanism. The authors then propose a method that constructs the graph based on the Markov property up to an equivalence class." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Some bits of the exposition are unclear (see below)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Please refer to the questions mentioned in Weaknesses.\n\n2. I think theoretically the proposed framework (the proposed graphical model + algorithm) should be able to handle interventions without selection bias, but how does the proposed algorithm perform empirically compared to existing methods in scenarios without selection bias?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The selection bias in interventional experiments is an under-explored but crucial issue in causal discovery, and the authors use two clear examples to illustrate why this problem matters and why the existing methods and simply augmenting the DAG fail.\n\n2. The authors provide a solid theoretical foundation in this paper by (1) rigorously defining the interventional twin graph and characterizing its Markov properties and (2) proving the soundness of the proposed algorithm.\n\n3. Synthetic experiments show that the proposed method outperforms baselines in handling selection bias and remains robust as the number of variables increases. It also uncovers novel causal relationships in real-world applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the issue of selection bias in interventional causal discovery, highlighting how existing methods fall short when subjects are selectively enrolled in experiments. 
To address this, the authors introduce a new graphical representation called \"interventional twin graph\" that explicitly represents both the observed world (where interventions are applied) and the counterfactual world (where selection occurs). They characterize the Markov properties of the proposed graphical model and develop an algorithm named CDIS (Causal Discovery from Interventional data under potential Selection bias), built upon FCI and based on the proposed graphical model and its Markov properties. The authors prove the soundness of CDIS. Experiments conducted under selection bias conditions demonstrate that CDIS outperforms baselines on synthetic datasets and uncovers novel causal relationships in real-world applications." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While the introduction of the interventional twin graph and its Markov properties is rigorous, it may be challenging for readers to grasp at first glance due to its complexity. Providing a high-level explanation to offer an intuitive understanding would greatly benefit readers.\n\n2. The interventional twin graph is more complex than a simple DAG and involves additional nodes. It would be helpful if the authors discussed the computational cost of the proposed model compared to the simpler DAG, including an analysis of the algorithm's computational complexity under the new graphical model.\n\n3. The authors did not address the identifiability guarantees of the proposed method. It would be useful to know if the method can reliably identify the selection variables and under what conditions the true interventional twin graph can be identified.\n\n4. Minor typos:\n * Line 48: \"We show that existing existing graphical representation paradigms\" --> \"We show that existing graphical representation paradigms\"\n * At the end of line 169: \"models a completely different scenario\" needs to be revised" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. What are the main challenges that affect the precision and completeness of the algorithm? How sensitive is it to unmeasured confounders?\n2. If selection bias is detected through diagnostics, how can this information be leveraged to help causal discovery?\n3. Any data points on the computational cost of the algorithm?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The problem of selection bias is important yet often overlooked in existing interventional causal discovery. The setting is general.\n2. This paper is technically sound, with clear formulation of the causal DAG and Markov properties. \n3. The illustrative examples enhance the paper's clarity, helping readers better understand the concepts."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the limitations of current graphical models and causal discovery methods in handling pre-intervention selection bias in interventional causal discovery. To overcome these limitations, the authors propose a novel twin graph model that effectively captures both the observed world and the counterfactual world where selection occurs. The paper establishes the Markov properties of this new model and introduces a provably sound algorithm for identifying causal relationships. The effectiveness of this approach is demonstrated through synthetic and real-world experiments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "More comprehensive results and explanations of the empirical studies would be beneficial to support the effectiveness of the proposed algorithm. For example:\n1. For the simulation, can you report the proportion of true causal edges estimated as directed edges by the algorithm? Additionally, a comparison of the output graphs with the ground truth would illustrate how the new algorithm performs differently from other methods under selection bias. \n2. For the gene application, can you provide a more comprehensive analysis of the results? \n3. For the education dataset, can you explain why the pre-intervention selection bias is a potential issue? Highlighting and interpreting key information of the resulting graphs would be helpful, as the current graph and variable names are difficult to follow.\n\nMinor comments about clarity:\n- The notation in this setting is dense and improvements in readability would be helpful for readers less familiar with the area. For example, \"CI\" in line 93 and different types of arrows in Example 7 can be clarified before their first appearance.\n- Typos: line 48 \"existing\", line 295 \"false\"." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Can you think of any ways in which you could evaluate the completeness of your proposed algorithm in the simulations? In a causal discovery paper, we should be concerned with making true discoveries, not just avoiding false discoveries.\n\nDoes the paper's approach generalize to arbitrary latent variables?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper clearly lays out the problem of selection bias in causal discovery and why certain natural approaches to the problem are not sufficient. The paper also puts forward a very general solution to the problem and considers its consequences. Overall the paper is well-written, despite being notation-heavy."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper considers the problem of interventional causal discovery in the context of selection bias. The paper first explains why simply augmenting the causal DAGs with a selection variable is insufficient. The paper then goes on to introduce the concept of a twin graph, in which every node is replicated in a counterfactual/\"basal\" world, and there is a defined set of rules by which the relationships of the basal world are constructed. Based on this construction, the paper goes on to define the set of d-separations that are implied by this model and establish the graphical criteria for Markov equivalence. The paper provides an algorithm to recover the Markov equivalence class and evaluates it on both simulated and real-world data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "One thing that was unclear to me is how complete is the paper's characterization of Markov equivalence classes in the given model. I would think that the Markov equivalence class would encode all DAGs with the same conditional independence structure (i.e. the right-hand side of the implications in Theorem 1). However, the equivalence structure is defined with respect to the left-hand side of the implications in Theorem 1. This would seem to imply that the equivalency classes are not as fine-grained as they potentially could be.\n\nThe algorithm provided also suffers from this issue, in that the authors point out that it may not be complete. It is not clear to me how useful it is to have a causal discovery algorithm that is sound but not complete. The trivial algorithm that says there are no causal relationships is sound but not useful.\n\nThe simulation study not reporting any information on completeness is disheartening. While I understand that the paper does not contain any guarantees on completeness, in the simulations there is access to the ground truth. So it is hard to see how there is no way to evaluate the ability to discover some fraction of those relationships.\n\nAt a higher level, I'm not sure how much of the framing of the paper is specific to the selection problem. It seems like the approach of the paper is tackling the more general problem of causal discovery with unobserved latent variables. If that is not the case, then the paper should explain how their methods do not generalize to the latent variable setting." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "-" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- This paper studies a relevant and interesting problem - both mathematically and philosophically. 
It considers the question: \"What does selection bias actually mean?\" and proposes a sound answer and the necessary mathematical framework to deal with such situations.\n- Based on the framework to treat selection bias, a sound and complete causal discovery algorithm is proposed.\n- The method is evaluated not only on synthetic data, but on real-world examples as well. This provides some confidence that it may be useful in practical applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the important but overlooked problem of selection bias in interventional causal discovery, where subjects are selectively enrolled in experiments (e.g., drug trials focusing on patients with specific conditions). The authors show that existing methods for interventional causal discovery fail to properly handle selection bias, as subtle differences in when and where interventions occur can lead to inconsistent conditional independence relations. To address this, they introduce a novel graphical model called the \"interventional twin graph\" that explicitly accounts for both the observed world (where interventions are applied) and the counterfactual world (where selection occurs before interventions), along with characterizing its Markov properties and equivalence classes. They develop a provably sound algorithm called CDIS (Causal Discovery from Interventional data under potential Selection bias) that can identify causal relations and selection mechanisms from data with soft interventions and unknown targets. Through experiments on both synthetic data and real-world applications in biology and education, they demonstrate that their method achieves higher precision in identifying true causal relations compared to existing approaches when selection bias is present." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The biggest weakness I see is the presentation of the paper. The first two sections are dense, but give a good introduction and motivation to the problem, based on good illustrations in Examples 1 and 2.\n\nHowever, Section 3 is the most painful piece of text I have read in a while. It relies mostly on mathematical notation to bring across the main points and lacks the contextualization in prose. I appreciate that examples are given in Section 3, but even those are a bit cryptic and fail to provide an accessible intuitive understanding. I suppose there are three main reasons for this: (1) Writing about complex SCMs is inherently difficult and a certain level of formalism is necessary - not much you can do here. (2) The amount of content in the main paper, given the page limit, might be a bit too much. Some of the more technical parts could be relegated to the appendix and exchanged for more contextualization. (3) The text could consider the reader's state of mind more. Some examples:\n\n- L211f: introducing the functions $f^*$ uses the mathematical symbols for the corresponding variables in the counterfactual basal world to introduce them, but does not use the word \"counterfactual\". That means as a reader, I either have it in working memory, or I have to go back to the definition and jump back again to the sentence to parse it.\n- As far as I can tell, abbreviations like \"CI\" and \"MAG\" aren't defined, or used before they are defined, e.g. 
\"PAG\".\n\nSuch presentation choices add unnecessary mental effort to understanding, and I would think twice before going back to this paper and building on it for future work - not because it's wrong, but because of the mental effort to access the information." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024when,\ntitle={When Selection meets Intervention: Additional Complexities in Causal Discovery},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xByvdb3DCm},\nnote={under review}\n}" }, "abstract": { "value": "We address the common yet often-overlooked selection bias in interventional studies, where subjects are selectively enrolled into experiments. For instance, participants in a drug trial are usually patients with the relevant disease; A/B tests on mobile applications target existing users only, and gene perturbation studies typically focus on specific cell types, such as cancer cells. Ignoring this bias leads to incorrect causal discovery results. Even when recognized, the existing paradigm for interventional causal discovery still fails to address it. This is because subtle differences in _when_ and _where_ interventions happen can lead to significantly different statistical patterns. We capture this dynamic by introducing a graphical model that explicitly accounts for both the observed world (where interventions are applied) and the counterfactual world (where selection occurs while interventions have not been applied). We characterize the Markov property of the model, and propose a provably sound algorithm to identify causal relations as well as selection mechanisms up to the equivalence class, from data with soft interventions and unknown targets. Through synthetic and real-world experiments, we demonstrate that our algorithm effectively identifies true causal relations despite the presence of selection bias." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "causal discovery", "selection bias", "experiments", "interventions" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/4007a87b0e47f7c459c6d887383f2fc559a863f8.pdf" }, "presentation": null, "primary_area": { "value": "causal reasoning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "When Selection meets Intervention: Additional Complexities in Causal Discovery" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xCFdAN5DY3
A Foundation Model for Weather and Climate
main
Active
Foundation models; atmospheric physics; weather; climate; fine-tuning; super-resolution
applications to physical sciences (physics, chemistry, biology, etc.)
3;3;5;6
4;4;4;3
3;2;2;3
2;3;3;3
3;3;3;3
4.25
3.75
2.5
2.75
3
-0.777778
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Major: \n- Do you compute area-weighted RMSEs? If not, I strongly encourage fixing this (especially for the weather forecasting experiment; see e.g. Weatherbench2).\n- For the fine-tuning experiments, do you start them based on the weights resulting from the first or second pre-training stage? If the former, then the second pre-training seems to also be a form of fine-tuning and I would suggest avoiding using the term \"zero-shot\" for reporting the forecasting results.\n- From looking at Figure 3, I don't think that I agree with the authors' interpretation that *\"It is interesting that reconstruction performance is relatively little affected by lead time at the lower end of masking ratios\"*. There's a clear gap between \"0h, global\" and \"6h, global\", especially for the lower end of masking ratios. Am I missing something?\n- Can you include downscaling results on more variables than only T2M?\n\nMinor:\n- What's the difference between the encoder and decoder blocks? Fig. 1 suggests these are identical... Is the difference some densification by the \"reconstruct batch\" module in Fig. 1? If so, can you explain this more and make it clearer?\n- Line 174: \"reduce the masking ratio to 50%\"... is this a typo (you use the same rate for the first stage)? What's correct?\n- Why is there no reference line in figure 5b)?\n- Please define what's meant by spatial and temporal RMSEs.\n- Can you include snapshots like Fig. 9 but at the native temporal resolution used for prediction (i.e. not a monthly mean)?\n- How does using different patch sizes (larger than the used size of 1) impact downscaling performance? \n- Why do you call the model Prithvi WxC?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Prithvi WxC is quite flexible as it can be used for a broad range of downstream applications, as convincingly shown in the experiments. \n2. The method contains original ideas such as the pre-training objective and using climatology-derived anomalies as targets, which I found interesting to read about.\n3. The paper is generally clearly written and easy to read." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a new foundation model, Prithvi WxC, for atmospheric modeling applications in weather and climate. Prithvi WxC was trained on 3-hourly data from 1980 to 2019 from the MERRA-2 reanalysis dataset based on a masked reconstruction/forecasting pre-training objective. \nThe model follows a transformer-based encoder-decoder architecture inspired by Hiera and MaxViT.\nAfterward, the model is fine-tuned for various downstream tasks: Medium-range weather forecasting, global and regional downscaling, and learning a gravity wave flux parametrization. 
These tasks have different sets of spatial resolutions, variables, and datasets, showing the flexibility of the foundation model." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper falls short of establishing a compelling case for Prithvi WxC as a foundation model for weather or climate. The practical significance and advantages of this approach remain inadequately demonstrated:\n\na.) While foundation models typically excel at zero-shot performance and data-efficient fine-tuning across diverse tasks, the evidence presented for Prithvi WxC's capabilities in these areas is not convincing. Baselines for the non-forecasting experiments are either very weak (interpolation-based downscaling) or non-existent (gravity wave experiments). Some highly relevant and simple baselines are: \n- How much worse(?) does Prithvi WxC perform on these tasks if you omit the pre-training stage (i.e. initialize with random weights instead of the frozen pre-trained ones, and train all parameters jointly from scratch on the tasks)? \n- How about completely removing the pre-trained transformer backbone (i.e. removing the Prithvi WxC block from Figures 12 & 13)? \n- For the latter, it would also be good to run an experiment where you replace the pre-trained Prithvi WxC backbone with some \"lightweight\" blocks (e.g. a (deeper) U-Net), trained in a task-specific way from scratch, to account for the huge difference in parameter counts if you completely remove Prithvi WxC. \n\nThese ablations would immensely help in understanding how useful the pre-training stage is for these downstream applications (e.g. does using pre-trained Prithvi WxC improve performance over such simple baselines? Is it more data-efficient?). Otherwise, it is hard to see evidence for the claim in the conclusion that *\"Instead of building task-specific ML-models from scratch, these pretrained encoders can be used to develop more precise data-driven models of atmospheric processes\"*.\n\nb.) No ablations are included. I understand that training such a huge model is expensive but having a few ablations would have been very appreciated (perhaps, with a smaller-scale version of the model). For example:\n\n- How crucial is it to predict climatology-normalized targets as opposed to normal per-variable means/stds? \n- What's the forecasting performance of Prithvi WxC after the first pre-training phase?\n- How important is local vs global masking? What about the masking rates?\n- What's the line of thought behind randomizing the distance between input timesteps? Can the model only use one input timestep? I presume this is possible by masking the corresponding snapshot by 100%, but no experiments with this setting are shown. \n\nc.) The weather forecasting results seem lukewarm, although it is hard to judge because the comparison is not apples-to-apples.\n- Prithvi WxC is trained and evaluated on MERRA-2. The baselines are evaluated on ERA5. These reanalysis datasets have different spatial resolutions. The evaluation years seem to be different too (correct me if I'm wrong). It would help to fix this mismatch. For example, given the foundational nature of Prithvi WxC... why not fine-tune it on ERA5 directly? 
Showing that it can be competitive with these baselines in an apples-to-apples comparison would be a very strong result.\n- Based on the mismatched comparison, Prithvi WxC seems to be competitive on 6h to 12h forecasts but it's quite notable that its performance implodes compared to the baselines for longer lead times. It is very unclear why. I wouldn't necessarily expect this version of Prithvi WxC to be state-of-the-art, but the performance does seem underwhelming. Especially given that the authors did \"several things\" to tune these results (i.e. a second forecasting-specific pre-training stage and autoregressive rollout fine-tuning).\n- The hurricane evaluation includes hurricanes from 2017 to 2023. This seems to overlap with the training data period (up to 2019). \n- Either Figure 6 or its analysis in the main body of the text (lines 251-253) is wrong because I see each of the three models doing best on exactly one of the three RMSE figures.\n- For the hurricane forecasting experiments, I would appreciate a comparison to the state-of-the-art models included in the weather forecasting experiments (e.g. GraphCast) which have been shown to be better than FourCastNet.\n\nd.) The downscaling problem setup is artificial. Downscaling coarsened versions of existing reanalysis/model outputs is not of much use in practice. A realistic and important downscaling application, as discussed in the Appendix, would be to downscale coarse-resolution model outputs to high-resolution outputs (either of a different model, observations, or the same model run at higher resolution).\n \ne.) The climate model parameterization experiments should be more carefully interpreted. \n- The model predicts outputs that are normalized by the 1980-2019 climatology. Unfortunately, decadal or centennial simulations of the future under a changing climate are inherently a non-stationary problem. It is highly unclear if Prithvi WxC would remain stable, let alone effective, under this highly relevant use case. This is particularly so as the in-the-loop (coupled to a running climate model) stability of ML-based climate model parameterizations is a well-known issue.\n- The selling point for ML-based emulators of climate model parametrizations is often their computational cheapness. Thus, the runtime of Prithvi WxC should be discussed. Given the large parameter count of Prithvi WxC it might be important to note its runtime as a limitation for these kinds of applications.\n- Line 461 claims that Prithvi WxC \"outperforms\" task-specific baselines but no baselines whatsoever are included in the manuscript for this experiment.\n- Are the inputs a global map? I am not familiar with gravity waves, but I believe that most physics parameterizations in climate models are modeled column-wise (i.e. across atmospheric height but ignoring lat/lon interactions). This is surely a simplification of these parameterizations, but it seems to indicate that they're highly local problems. What's the motivation for using global context then?\n- The end of the section should be worded more carefully, clearly stating the aforementioned limitations.\n\nf.) No scaling experiments are included. Thus, it is unclear how important its 2.3 billion parameter size is, how well the model scales, and how its size impacts performance on the downstream applications. Besides, vision and language models are usually released with multiple model sizes that cover different use cases (e.g. balancing inference speed with accuracy). 
It would be really useful to get these (and carefully compare them) for Prithvi WxC.\n\n2. Related work is insufficiently discussed. Please include an explicit section discussing it, focusing on:\n- Carefully comparing similarities/differences to existing weather foundation models (e.g. architectures, pre-training objectives, downstream applications etc.). Besides, ClimaX is not properly discussed in the paper. Given that it's also a transformer-based foundation model, validated on forecasting, downscaling, and climate emulation, it is very important to include it in the comparison. \n- Similarly, please discuss how exactly the masking technique in this paper relates to the ones proposed in Vandal et al. and McNally et al.\n- Carefully discuss how the architecture is derived from Hiera and/or MaxViT (and any other papers from which components were derived).\n\n3. While the authors transparently discuss some issues/limitations with their experiments (e.g. the evaluation data mismatches), it would be nice to also include an explicit paragraph or section on this (and include aforementioned things like the issues with the climate model parameterization experiments).\n\nMinor:\n- Can you properly discuss, and include a reference to, what a Swin-shift is?\n- Similarly, for the \"pixel shuffle layers\"\n- Line 39: Pangu -> Pangu-Weather\n- Line 48: Nowcasting should be lower-case\n- Equation 1: Consider reformulating this as an objective/loss function.\n- Also Eq. 1: What is $\\hat{X}_t$? What is $\\sigma_C$?\n- Line 93: $\\sigma^2_C = \\sigma^2_C(X_t - C_t)$ doesn't make sense to me.\n- Line 104: *\" same 20 year period that we used for pretraining.\"* .... Do you mean 40 year period? If not, which 20-year period from the 40-year training period did you use?\n- Line 157: Multiple symbols are undefined (e.g. $V_S$).\n- Line 169: It's not entirely clear what \"alternates\" means in this context.\n- Line 429: \"baseline\"... do you mean Prithvi WxC? \n- Line 507: \"improved\"... improved compared to what?\n- Figure 12: Do you mean 'downscale' on the right \"upscale\" block?\n- Sections D.2.3 and D.2.4 in the appendix are literal copies of the corresponding paragraphs on pages 8 and 9. Please remove." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "### Minor issue: Weak baselines for downscaling\nThe comparison primarily involves interpolation-based methods and does not consider more advanced, AI-driven downscaling models or domain-specific statistical/dynamical downscaling." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "### Originality and significance 
Prithvi WxC pushes the foundation model concept in atmospheric science further by expanding beyond the forecasting focus seen in earlier foundation models such as Aurora. 
Its architecture and training approach enable it to tackle a variety of downstream tasks, such as downscaling and parameterization, making it a valuable tool for both short-term weather predictions and long-term climate modeling. With its modular design, Prithvi WxC can be adapted flexibly to new tasks that combine AI with physical climate models.\n\n \n### Quality\nThe authors have thoroughly evaluated Prithvi WxC across a range of tasks, including zero-shot reconstruction, downscaling, and extreme event forecasting. The extensive validations across different downstream tasks support Prithvi WxC’s adaptability and effectiveness in diverse weather and climate applications.\n\n\n\n### Clarity and open research\nThe paper is organized in a clear, logical flow, moving smoothly from motivation and background to model architecture, objectives, and results. Key ideas, like the mixed masking and forecasting objective, are presented in a way that makes the technical contributions accessible to both AI and climate science audiences. Code and comprehensive supplementary materials are provided." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents Prithvi WxC, a new foundation model designed to support a wide range of weather and climate applications. Built on a large, transformer-based architecture, Prithvi WxC is trained on the extensive MERRA-2 dataset, which covers 160 variables capturing atmospheric data. The model is unique in its ability to address multiple tasks—including forecasting, downscaling, and parameterization—making it versatile in handling both regional and global weather patterns." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "### Reliance on a single reanalysis dataset (MERRA-2)\nMERRA-2, with its relatively lower spatial resolution, is not commonly used in AI-driven weather and climate research, where higher-resolution datasets like ERA-5 are preferred for their superior predictive accuracy. The authors themselves acknowledge this limitation, citing the weaker performance of Prithvi WxC in hurricane track forecasting compared to the ERA-5-trained FourCastNet, attributing this discrepancy to MERRA-2's lower spatial resolution. \n\nPrithvi WxC’s exclusive training on the MERRA-2 reanalysis dataset raises questions about whether it truly qualifies as a foundation model. The narrower training base implies that the model may be learning a representation more specific to MERRA-2 characteristics, along with its biases and errors, rather than capturing a broader, more generalized understanding of weather and climate dynamics. By contrast, the Aurora foundation model is pretrained on six diverse weather and climate datasets, including ERA-5, CMCC, IFS-HR, HRES Forecast, GFS Analysis, and GFS Forecasts, which span various sources like forecasts, analyses, reanalyses, and climate simulations [1]. This multi-source approach ensures a broader, more representative foundation that enhances versatility across diverse applications. Prithvi WxC would benefit from a similar multi-dataset training approach to strengthen its robustness and generalizability, which would then qualify it as a foundation model.\n\n[1] Bodnar, C., Bruinsma, W. P., Lucic, A., Stanley, M., Brandstetter, J., Garvan, P., ... & Perdikaris, P. (2024). Aurora: A foundation model of the atmosphere. 
arXiv preprint arXiv:2405.13063.\n\n### Ablation Study\nWhile the authors propose a novel objective function, they only hypothetically attribute Prithvi WxC's strong short-term forecasting performance to its masking objective, without providing empirical evidence. This lack of testing weakens the claims about the model’s unique architecture and objective function. The observed performance could be influenced by several factors: the mixed objective itself, specific network structures or attention mechanisms, and choices made in the pretraining setup.\n\nWithout ablation experiments, the paper's assertions about the effectiveness of these innovations remain speculative, leaving readers uncertain about the impact of each component. An ablation study could isolate the contributions of these elements and would strengthen the paper by making its claims more concrete and providing clearer insights into Prithvi WxC’s architectural and training contributions." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In addition to the weaknesses above, can the authors clarify the following:\n\n1. How are the masking pre-training strategies better than just using variable lead-time that includes delta_t = 0? \n2. How is the local-global attention better than existing pre-training strategies, e.g., the patch-variable-level tokenization employed in ClimaX? \n3. With masking as an additional pre-training strategy, what is the cost-performance tradeoff? \n4. Have the authors measured Prithvi's parameterization stability in an online-coupled setting?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Operationally, the large-scale pre-training of a 2.3B FM is impressive in the emerging field of AI4Science. The use of a large-scale dataset is also noteworthy. The writing is clear, and the downstream tasks are clearly defined and touch upon some of the hardest challenges facing the field. The use of beyond-atmospheric variables (coupled ocean + land) is also welcomed to build a full Earth system FM." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a new foundation model for weather and climate, called Prithvi WxC, with 2.3 billion parameters, and trained on relatively unconventional yet interesting reanalysis products of MERRA-2. The authors use a relatively novel pre-training strategy in this field, where in addition to forecasting-only pre-training, they also combine masking for reconstruction in the hope of better self-supervision, among several upsides (e.g., natural extension to data assimilation with sparsely-gridded observations). The FM is then evaluated on several downstream tasks, including forecasting, downscaling, and parameterization."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There are several major weaknesses in the paper, including non-existent/weak baselines and misleading claims, summarized below:\n\nMajor weaknesses\n1. The term foundation model for this work is misleading, since in contrast to other similar works, e.g., ClimaX and Aurora, the model is only trained on one data source, which is MERRA-2. The problem: reanalysis products are by construction applicable for short-medium range applications as they are used either to evaluate NWPs or as ICs for the next forecasting window. With climate datasets missing, the claim that Prithvi is also a climate FM is an overreach at best. \n\n2. Related to the first point, the paper does not evaluate on any climate-related tests despite claiming it to be a climate FM too: (a) is the model stable over a 50-100 year climate rollout? (b) what is the climate drift/bias compared to SOTA climate emulators? The paper applies downscaling to a climate dataset CORDEX, but this is less of a climate question and just a general downscaling problem since the former is more concerned about getting the long-term statistics, rather than a high-resolution state realization, correct (which is near impossible for nonlinear chaotic systems such as the Earth system). \n\n3. The downscaling benchmarking is also unacceptable as the baseline is too weak (e.g., bilinear, nearest neighbor). The authors should either remove this part or add stronger baselines where the SOTA is at least a deep learning based method. Also, there is no benchmarking on the parameterization downstream task. At least use existing DL-based models from recent works. \n\n4. The forecasting performance appears to be unconvincing at best: with 2.3B parameters, it performs worse than e.g., <50M parameter GraphCast (~40x smaller) even at a short lead time of 66 hours (<3 days). Even when the authors mention this result is \"zero-shot\" (which I find unconvincing since there is still rollout fine-tuning) and the target is different (MERRA-2 vs ERA5), the obvious larger error growth (Figure 4) is alarming as it may not be useful for long-range climate forecasting. Also, why not benchmark against MERRA-FourCastNet in the forecasting task since this provides a fairer comparison as both are trained with MERRA (as in the case for hurricane prediction). Finally, figure 4c (single line: forecasting cloud) is a case-in-point: the lack of sufficient apples-to-apples baselines where training/eval is done on MERRA-2. \n\nOverall, I find the results unconvincing given the lack of data sources, proper baselines, and inferior performance gain despite Prithvi being orders-of-magnitude larger than any SOTA model. As a side note: I believe the tasks of downscaling and parameterization are similar in that both attempt to resolve small-scale physics in an otherwise coarse-resolution model. I suggest the authors combine or use different downstream tasks, e.g., climate projection." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Architectural flexibility: Can you provide a detailed explanation of the adaptability of the architecture when applied to non-rectangular grid systems, such as those used for ocean simulation or polar regions?\nGeneralization strategy: What steps are taken to ensure the performance of the model when training or fine-tuning on datasets outside of MERRA-2, such as ERA5 or even higher resolution datasets?\nHurricane forecast: Can you provide more details on how the hurricane trajectory prediction of this model compares to task-specific models such as FourCastNet, especially in areas with sparse data coverage?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Originality: The introduction of foundation models for weather and climate applications is novel and important. Unlike task-specific models such as FourCastNet and GraphCast (Lam et al., 2022), Prithvi WxC addresses a wide range of tasks and effectively narrows the gap between AI models for specific weather tasks and general-purpose AI foundation models.\nQuality: The mixed pre-training objective of this model combines masking and prediction, which is robust, especially because it uses deviations from climatology rather than just future state prediction, which enhances the adaptability of the model. The model also showed impressive results in zero-shot evaluation of reconstruction and autoregressive prediction, outperforming the baselines at short lead times.\nSignificance: The ability to generalize to multiple downstream tasks, such as downscaling and gravity wave parameterization, suggests that this model has the potential to have a significant impact on weather and climate modeling." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper titled \"A Foundation Model for Weather and Climate\" introduces Prithvi WxC, a 2.3 billion parameter foundation model for various weather and climate tasks. These include downscaling, autoregressive prediction, and extreme event estimation. The model is trained on 160 variables from the MERRA-2 dataset and combines masked reconstruction with prediction tasks to learn from various atmospheric data. Its encoder-decoder architecture and ability to work with different spatial topologies make it suitable for global and regional weather modeling." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Generalization to other datasets: Although the model performs well on MERRA-2 data, its generalization ability to other datasets such as ERA5 or CMIP has not been fully explored. Validation on different datasets would better demonstrate its robustness.\nLong-term prediction: The accuracy of Prithvi WxC decreases as the prediction window extends, especially beyond 66 hours, and its performance is poor compared to models such as Pangu. A deeper investigation into how to maintain performance over an extended time frame would improve its utility in medium- and long-term forecasting." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce a foundation model for weather and climate data. 
The model is validated on a number of downstream tasks ranging from downscaling (super-resolution) to forecasting." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024a,\ntitle={A Foundation Model for Weather and Climate},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xCFdAN5DY3},\nnote={under review}\n}" }, "abstract": { "value": "Triggered by the realization that AI emulators can rival the performance of traditional numerical weather prediction models running on HPC systems, there is now an increasing number of large AI models that address use cases such as forecasting, downscaling, or nowcasting. While the parallel developments in the AI literature focus on foundation models -- models that can be effectively tuned to address multiple, different use cases -- the developments on the weather and climate side largely focus on single-use cases with particular emphasis on mid-range forecasting. We close this gap by introducing Prithvi WxC, a 2.3 billion parameter foundation model developed using 160 variables from the Modern-Era Retrospective Analysis for Research and Applications, Version 2 (MERRA-2). Prithvi WxC employs an encoder-decoder-based architecture, incorporating concepts from various recent transformer models to effectively capture both regional and global dependencies in the input data. The model has been designed to accommodate large token counts to model weather phenomena in different topologies at fine resolutions. Furthermore, it is trained with a mixed objective that combines the paradigms of masked reconstruction with forecasting. We test the model on a set of challenging downstream tasks namely: Autoregressive rollout forecasting, downscaling, gravity wave flux parameterization, and extreme events estimation." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Foundation models; atmospheric physics; weather; climate; fine-tuning; super-resolution" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/bb8fc3ac3b95ec2203f1764d25c0ab9064058aa1.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
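The abstract above describes a mixed pre-training objective that combines masked reconstruction with forecasting. A minimal sketch of what such an objective can look like follows; it is a generic reconstruction of the idea, not the authors' code, and the `model` interface, the zero-fill masking, and the equal loss weighting are all assumptions.

```python
import torch
import torch.nn.functional as F

def mixed_pretraining_loss(model, x_past, x_future, mask_ratio=0.5):
    """Generic sketch of a masked-reconstruction + forecasting objective.

    x_past:   (B, T, D) input atmospheric tokens
    x_future: (B, T, D) target future state
    model:    hypothetical encoder-decoder returning (reconstruction, forecast)
    """
    mask = torch.rand(x_past.shape[:2]) < mask_ratio    # which tokens to hide
    x_masked = x_past.clone()
    x_masked[mask] = 0.0                                # zero-fill masked tokens (assumption)
    recon, forecast = model(x_masked)
    loss_recon = F.mse_loss(recon[mask], x_past[mask])  # score only the masked tokens
    loss_forecast = F.mse_loss(forecast, x_future)      # standard forecasting term
    return loss_recon + loss_forecast                   # equal weighting assumed
```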
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/ee0ead1d738f3a4a7b0de0952ed83411592b1c33.zip" }, "title": { "value": "A Foundation Model for Weather and Climate" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xCMmtYOsiL
Series-to-Series Diffusion Bridge Model
main
Active
time series forecasting; diffusion model
learning on time series and dynamical systems
3;5;5;8
4;3;3;3
2;2;2;3
2;2;2;3
3;3;2;3
5.25
3.25
2.25
2.25
2.75
-0.727607
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "NA" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper utilizes the Diffusion Bridge Model to help the reverse process start from a more deterministic state, reducing the instability caused by noise and thereby facilitating better predictions." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a comprehensive framework that encompasses most existing diffusion-based methods. Building on this foundation, the authors introduce the Series-to-Series Diffusion Bridge Model (S2DBM). Experimental results demonstrate that S2DBM delivers superior performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "weakness:\n1. There is a notation issue with \\(\\hat{\\gamma}_t\\) in line 203; the writing needs to be standardized. Additionally, it needs to be clarified whether the values of \\(\\hat{\\alpha}_t\\), \\(\\hat{\\beta}_t\\), and \\(\\gamma_t\\) should have a specific relationship to conform to the diffusion model.\n2. In line 411, it is mentioned that a comparison with timediff was made, but Table 2 does not include TimeDiff data while other baselines are present.\n3. The content of the paper appears to primarily build on existing work by combining the model guidance from TMDM and the non-autoregressive approach of timediff with the existing Brownian Bridge process. This integration seems to lack novelty." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weakness plz." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The authors provide a comprehensive summary of existing models, highlighting that their primary differences lie in the formulation of $\\hat{\\gamma_t}$. By introducing the Brownian bridge process into diffusion-based time series forecasting models, they establish a range of relevant properties." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper revisits the application of diffusion models for time series forecasting, presenting a unified framework that consolidates these methods. Building on this framework, the authors incorporate the Brownian bridge process to enhance prediction accuracy. The results demonstrate that the proposed approach outperforms existing diffusion-based forecasting models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. In Line 259, the title of Proposition 1 is \"Brownian Bridge between Historical and Predicted Time Series.\" However, the Brownian bridge’s endpoint is set to $h$, the Prior Predictor’s forecasted value of $x$. The paper provides no explanation as to why $h$ is chosen as the endpoint for the Brownian bridge. \n2. It would be beneficial for the authors to clarify, from a theoretical perspective, why the Brownian bridge is integrated into the diffusion model for time series forecasting. Specifically, how does its ability to \"pin down\" the diffusion process at both ends help reduce instability from noisy inputs and enable the accurate generation of future features based on historical time series?\n3. In line 195, Theorem 1 concerns non-autoregressive diffusion processes. However, the summarized models, CSDI and SSSD, do not appear to be non-autoregressive models. Could there be an issue with the theorem here?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Q1.\nCan you elaborate on the computational efficiency of $S^2DBM$, particularly in relation to larger datasets or real-time forecasting applications?\n\nQ2.\nWhat specific advantages does the Brownian Bridge process offer over other stochastic processes for time series forecasting in the context of your model?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "S1.\n$S^2DBM$ integrates the Brownian Bridge process into time series forecasting using diffusion models.\nBy redefining the diffusion framework, the authors introduce a new model and consolidate various non-autoregressive diffusion techniques into a comprehensive framework, elucidating their interrelationships and underlying principles. \n\nS2.\nThe authors conduct thorough theoretical groundwork. \nThe empirical evaluations are robust, utilizing diverse real-world datasets to benchmark the model's performance against SOTA methods.\n\nS3.\nThe draft is well-structured and organized. \nThe introduction succinctly outlines the problem context and motivates the need for the proposed model." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents the Series-to-Series Diffusion Bridge Model ($S^2DBM$), a promising approach in time series forecasting using diffusion models. \nTraditional diffusion models often struggle with deterministic point-to-point predictions. \n$S^2DBM$ addresses this by leveraging the Brownian Bridge process to reduce noise and improve accuracy in reverse estimations, effectively capturing temporal dependencies in time series data. \nThe model incorporates historical data as informative priors, stabilizing diffusion and enhancing point-to-point prediction capabilities. \nExperimental results on various datasets show that $S^2DBM$ outperforms existing diffusion-based and other state-of-the-art time series models, demonstrating superior accuracy in both deterministic and probabilistic forecasting tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1. \nSome assumptions and derivations in this work could be better justified. \nFor example, the choice of using a Brownian Bridge process warrants a more detailed discussion on why it is preferred over other stochastic processes. \nProviding empirical evidence or theoretical reasoning for the choice of this process could strengthen the argument for its effectiveness in reducing randomness in forecasting.\n\nW2.\nWhile the paper presents strong performance metrics, it lacks robustness testing under adverse conditions, such as noisy inputs or missing data. \nFuture experiments should include scenarios with varying levels of data quality to demonstrate how $S^2DBM$ handles real-world challenges.\n\nW3.\nCurrently, the experiments focus primarily on a single configuration. \nInvestigating how different settings of hyperparameters, such as the choice of the prior predictor or conditioning modules, impact performance could provide valuable insights. \nFor instance, evaluating the effects of varying the number of diffusion steps or using alternative conditioning mechanisms could highlight the robustness of $S^2DBM$ across diverse scenarios." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. In Table 2, your models do not achieve the best performance across all datasets. Could you explain why linear models perform best in several cases, while your models do not?\n2. Since $c = E(x)$ and $h = F(x)$ are two important components of your algorithms, I am curious about the role of F in enhancing performance. What impact does it have? An ablation study of F may help." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The presentation of this paper is clear and easy to follow, with algorithms well articulated.\n2. 
The paper provides a unified framework for several existing diffusion-based time series models, which is well-supported by theoretical foundations. The insights offered are impressive.\n3. Both point-to-point and probabilistic forecasting are addressed in the reverse process, broadening the algorithm's application scope." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses point-to-point time-series forecasting through the use of diffusion models. The authors propose a unified non-autoregressive framework that encompasses most existing diffusion-based time-series models. Building on this framework, they introduce S2DBM, which incorporates the Brownian Bridge process. By integrating historical information via informative priors and conditions, S2DBM effectively reduces randomness and enhances forecasting accuracy." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Some expressions reference prior work without sufficient explanations, which can hinder comprehension. For instance, in the last equation of Section 3.1, the introduction of \\(y_\\theta\\) relies solely on a citation, making it time-consuming for readers to fully grasp its meaning.\n2. Table 4 presents results only for a horizon of 96, omitting long horizon settings. This limits the analysis and may leave important insights unaddressed." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Series-to-Series Diffusion Bridge Model" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024seriestoseries,\ntitle={Series-to-Series Diffusion Bridge Model},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xCMmtYOsiL},\nnote={under review}\n}" }, "abstract": { "value": "Diffusion models have risen to prominence in time series forecasting, showcasing their robust capability to model complex data distributions. However, their effectiveness in deterministic predictions is often constrained by instability arising from their inherent stochasticity. In this paper, we revisit time series diffusion models and present a comprehensive framework that encompasses most existing diffusion-based methods. Building on this theoretical foundation, we propose a novel diffusion-based time series forecasting model, the Series-to-Series Diffusion Bridge Model ($\\mathrm{S^2DBM}$), which leverages the Brownian Bridge process to reduce randomness in reverse estimations and improves accuracy by incorporating informative priors and conditions derived from historical time series data. Experimental results demonstrate that $\\mathrm{S^2DBM}$ delivers superior performance in point-to-point forecasting and competes effectively with other diffusion-based models in probabilistic forecasting." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "time series forecasting; diffusion model" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/4087bf50c532889a275e543287df8467c428f536.pdf" }, "presentation": null, "primary_area": { "value": "learning on time series and dynamical systems" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/f00be8e99421c0fef9c2114d6346425f20b36f7d.zip" }, "title": { "value": "Series-to-Series Diffusion Bridge Model" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xCkgX4Xfu0
A Single Goal is All You Need: Skills and Exploration Emerge from Contrastive RL without Rewards, Demonstrations, or Subgoals
main
Active
exploration;emergent skills;contrastive reinforcement learning;open-ended learning
reinforcement learning
5;6;8
4;4;4
2;2;3
2;2;3
3;3;3
6.333333
4
2.333333
2.333333
3
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "How are the “difficult goals” sampled, how many are sampled, etc? Why are thy chosen in this way?\n\nAre there other experiments done (e.g. something like the perturbation test, or a test where the single-goal policy shows useful representations or reasonable success on goal states outside of those in the goal path) which would alleviate concerns that this method is simply overfitting to solve for this one goal?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Under the conditions set by the work, i.e. learning single difficult goals without rewards, demonstrations, curriculums, etc, the single-goal CRL clearly outperforms other methods. Furthermore, this method reduces assumptions compared to standard RL. Finally, an important finding of this paper is that a policy can be trained under contrastive learning to find a difficult goal without providing intermediate easier goals." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work aims to solve goal-conditioned RL (GCRL) tasks where only one goal is desired without access to rewards (dense or sparse), curriculums, or demonstrations. It proposes modifying standard Contrastive RL (CRL) by using only a single goal in rollouts while still using multiple goals while training. The paper runs experiments to show that the specific combination of single-goal rollouts, multi-goal training, and contrastive representations are needed to achieve good results. The paper also includes some additional analysis to examine single-goal CRL under impossible goals and to explore the representations learned by CRL." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Firstly, it is unclear how the authors selected or defined “difficult goals”. Are the difficult goals sampled by failure cases of CRL? How many difficult goals are the results evaluated on? For example, if this method is effective on one particular goal, but not others, then it would not be effective in practice.\n\nSecondly, the authors claim that the first ablation disproves the hypothesis that single-goal CRL “learns useful representations and an effective policy only for states along that goal path” (line 476). However, since the single-goal CRL fills the replay buffer with rollouts where the policy aims to reach goal $s^*$, and the actor is only being updated with states from the replay buffer, how does this prove that the single-goal rollout + multi-goal training policy is not overfit to/does not only contain good representations for states along the goal path?\n\nFinally, Fig. 
11 shows that a monolithic critic CRL (Range of Difficulties) can reach approximately 60% success rate on the single hard goal on Sawyer Box after 100,000 steps. However, Fig. 3 shows that standard CRL (range of difficulties) can only achieve up to approximately 40% success rate on Sawyer Box after 1,000,000 steps. It seems this might contradict the assertion that the separate representations are important? (It is possible that the methodology for selecting the “difficult goals” is impacting these results.)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Do the authors have any statistics on eval performance over the same distribution of goals used to generate the replay buffer? One natural argument is that, in so far as each goal can be viewed as a separate task, the model will be best at the goal distribution that appears in the replay buffer. I believe this is distinct from the overfitting experiments, because in those experiments the final evaluation number is still only the final hard eval goal." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The fact that a single hard goal is sufficient with contrastive RL is certainly surprising, and the paper is upfront in not having a good explanation why this occurs. The fact it occurs over a few environments provides some evidence it is not simply a one-off occurrence." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors describe a curious phenomenon observed in some simulated robot manipulation environments used for RL. When using contrastive RL, using a single hard goal (one far away from the robot corresponding to task success) led to better learning outcomes than using a human designed curriculum of easy and hard goals.\n\nThis is observed across 4 environments, a Sawyer bin placing task, box placing task, peg insertion, and a 2d maze navigating task.\n\nThe authors provide a number of ablation experiments trying to study why this phenomenon occurs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I am suspicious that the results are too good - that there was no environment where using multiple goals performed best. Broadly my concern is that maybe these environments are too easy. If the model is able to succeed at the task given just the final hard goal, perhaps it's too hard to design a good dense reward or curriculum of goals to speed up learning. This wouldn't be too surprising, it's often been remarked that good partial reward design is difficult.\n\nI'm also not sure how \"single goal\" the final method is. In particular, Figure 10's caption was confusing to me. It seemed to suggest that in their method, the actor loss uses multiple goals, rather than a single goal? 
If so, this doesn't really seem like \"1 goal is all you need\". Essentially, I think the authors may be overgeneralizing their conclusions.\n\nMy understanding so far is this:\n\n* A contrastive critic is learned via contrastive RL, defining reward by $\phi(s,a)^T \psi(s_f)$, where $s_f \sim Geom(1-\gamma)$ steps in the future\n* When generating data, we use a single hard goal $s^*$ and act according to $\pi(a|s,s^*)$\n* When updating the actor, we sample a trajectory from the replay buffer of data generated according to $\pi(a|s,s^*)$. But, for each initial $s$, we then sample $s_f \sim Geom(1-\gamma)$ within the trajectory, and apply gradient updates as if we collected data according to $\pi(a|s,s_f)$, even though we actually collected data according to $\pi(a|s,s^*)$.\n\nIn which case, at most you could say 1 goal is the only requirement for data collection, but the policy still needs to be trained on every intermediate goal observed to achieve good performance." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Thank you for contributing this work. When reading the paper, the emergent skill learning argument felt like it needed further justification. \n\n\n(1) I wished to clarify if the following pattern of skill emergence always held? The paper suggests yes, but I wished to confirm this point. \n\n\"For example, in all three environments, the agent (1) first learns how to move its end-effector to\nvarying locations, (2) then learns how to nudge and slide the object, and (3) finally learns to pick up\nand direct the object. \"\n\n(2) I also wished to ask: when you mention fixed sampling intervals for checkpoints, what fixed interval was used? Were there any exceptions to the skill learning trends identified in (1) (e.g. checkpoints with purely random behaviours), and if so, how frequent were the exceptions?\n\nIn answering (1) and (2), it would be useful to provide:\n\n(a) Videos indicating the progression of skill learning, with details of the checkpoints; this would be helpful for validating the qualitative behaviours claimed within the paper. \n\n(b) Not entirely necessary, but it would be nice to have quantitative metrics for characterising the various skills. For instance, when you mention learning how to move the end-effector, distinguishing between purely random movements and directed movement would be useful; there are certain quantitative metrics that could be used, such as correlations between actions. Contrasting this with naive exploration approaches such as SAC may yield insights and further validate the skill learning claim. Similarly, for learning to interact with objects and eventually pick them up, characterising these skills quantitatively would be useful. \n\n(c) It may be possible to perform clustering on policy rollout data to identify distinct exploratory behaviours as learning progresses. This would be a valuable addition."
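The three-bullet reading of the algorithm above (single-goal data collection, then relabeling with geometrically sampled future states) can be made concrete with a short sketch. This is an illustrative reconstruction under the reviewer's stated assumptions, not the paper's code; the clipping of the sampled offset to the trajectory end is itself an assumption.

```python
import numpy as np

def relabel_with_future_states(trajectory, gamma=0.99, rng=None):
    """Relabel a rollout collected under a single hard goal s*: pair each
    state with a future state s_f ~ Geom(1 - gamma) steps ahead (clipped to
    the trajectory end, an assumption), yielding (state, goal) positives so
    the critic/actor train as if data came from pi(a | s, s_f).
    """
    rng = rng or np.random.default_rng()
    pairs = []
    for i in range(len(trajectory) - 1):
        offset = rng.geometric(p=1.0 - gamma)        # offset >= 1; heavy tail for high gamma
        j = min(i + offset, len(trajectory) - 1)     # stay within the trajectory
        pairs.append((trajectory[i], trajectory[j]))
    return pairs

# Toy trajectory of scalar "states": most relabeled goals are nearby future states.
print(relabel_with_future_states(list(range(10)), gamma=0.9)[:5])
```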
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The authors provide valuable insights into the effectiveness of training a policy conditioned on a single goal with contrastive reinforcement learning in the single-task setting. \n\n- The authors contribute to the discussion on skill learning through empirically demonstrating the emergence of skill learning with contrastive objective functions.\n\n- The authors validate each of their claims and provide discussions on counter arguments (e.g. overfitting single-task) that support their empirical results. \n\n- The authors provide sufficient information and high-quality materials to reproduce their results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work builds upon previous work in contrastive reinforcement learning. What's unique to this work is how their learning algorithm conditions on a single goal; this has implications on the distribution of data within the replay buffer which ultimately impacts the learning dynamics and exploration behaviour of the agent. The authors argue that this approach results in emergent skill learning without explicitly having to define a reward function or learning curriculum for the task being solved. This argument is supported with simulated experiments on robotic manipulation and maze traversing benchmarks. The performance of their approach is compared to contrastive reinforcement learning using a range of human-specified goal difficulties and existing approaches (SAC+HER and RIS). The authors justify their claim with empirical results, leading to the insight that contrastive reinforcement learning with conditioning on a single goal demonstrates advantageous exploration and skill learning behaviours relative to existing approaches that seek to learn effective exploration and skill learning strategies." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- There could be more analysis of emergent skill learning, in the current draft the authors mention that fixing a checkpoint and examining the qualitative behaviour of the policy they see evidence of skill learning, further justification of this claim is required in my opinion. \n\n- The authors highlight an awareness of this point but further analysis on why this strategy works would be useful. \n\n- It would be interesting to see if this approach results in exploration strategies that are capable of solving more complex tasks. For example tasks that encounter Sussman's anomaly, where naive subgoal design may not work well. I am unsure how this approach would perform but it would be promising if it did well and might further justify the strength of this more general approach to learning to solve tasks with reinforcement learning. Also tasks with significant bottlenecks in which naive exploration strategies are often incredible sample inefficient, does this approach help improve performance on such tasks?\n\n- While the authors address the concern of overfitting to the individual task being solved, they do not explore how well the learnt representations generalise. While there are examples of solving tasks with minor perturbations there isn't a discussion of more significant changes to the task environment. 
From a practical standpoint, it would be interesting to understand how well this approach can be used to learn value representations that generalise well across task environments. \n\n- This work suffers from ailments of reinforcement learning approaches more generally, most especially sample efficiency. It would be interesting to understand if the representations learnt in this work can be leveraged to address the issue of sample efficiency when learning multiple tasks (e.g. on the metaworld benchmark)." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Skills and Exploration Emerge from Contrastive RL without Rewards, Demonstrations, or Subgoals" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024a,\ntitle={A Single Goal is All You Need: Skills and Exploration Emerge from Contrastive {RL} without Rewards, Demonstrations, or Subgoals},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xCkgX4Xfu0},\nnote={under review}\n}" }, "abstract": { "value": "In this paper, we present empirical evidence of skills and directed exploration emerging from a simple RL algorithm long before any successful trials are observed. For example, in a manipulation task, the agent is given a single observation of the goal state (see Fig. 1) and learns skills, first for moving its end-effector, then for pushing the block, and finally for picking up and placing the block. These skills emerge before the agent has ever successfully placed the block at the goal location and without the aid of any reward functions, demonstrations, or manually-specified distance metrics. Once the agent has learned to reach the goal state reliably, exploration is reduced. Implementing our method involves a simple modification of prior work and does not require density estimates, ensembles, or any additional hyperparameters. Intuitively, the proposed method seems like it should be terrible at exploration, and we lack a clear theoretical understanding of why it works so effectively, though our experiments provide some hints." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "exploration", "emergent skills", "contrastive reinforcement learning", "open-ended learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/860dad66264a3d8e8e9b99d37f80c9d08731dfa7.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "A Single Goal is All You Need: Skills and Exploration Emerge from Contrastive RL without Rewards, Demonstrations, or Subgoals" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xDrFWUmCne
Learning to Discretize Denoising Diffusion ODEs
main
Active
Diffusion models;Efficient Sampling;Ordinary Differentiable Equations
generative models
6;6;6
4;4;2
3;3;3
3;2;3
3;3;3
6
3.333333
3
2.666667
3
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No ethics review." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I would like to hear the opinion of the authors on the concerns I raised in the weaknesses section." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is well-written and easy to follow. \n- It presents an easy solution to the sampling problem of diffusion models that only requires limited training time while obtaining. \n- The soft teacher loss is effective and simple to implement. \n- The evaluation is thorough and includes multiple models, multiple datasets, and multiple sampling strategies.\n\nIn general, I liked the paper and I lean toward acceptance. However, since this is not my area of expertise, I would wait for the discussion with the authors and other reviewers to increase the score to Accept and recommend borderline Accept for now." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a sampling method for diffusion models in order to reduce the sampling time required to generate an image. The paper proposes to learn the sampling steps from a teacher model that accurately solves the ODE by taking small step sizes. Extensive experiments show the effectiveness of the method while only requiring small amounts of training time." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Although I liked the paper, there are some concerns that, if addressed, would improve the paper. In the following paragraphs, I describe my concerns in detail:\n\n- In the table with the main results, sometimes it is not clear what the metrics are computed against. I suppose the metrics in table 2, 3, 4, and 5 are computed against random samples of the model using the accurate estimation of the ODE. However, if this metric is computed against the true distribution, the performance of the teacher with the accurate computation of the ODE should be shown (1000 steps). I think the evaluation protocol needs to be more clearly defined.\n\n- In a similar direction, Table 6 shows the performance of a teacher model using 8 steps. Why only 8 steps are used here? Would not the teacher use a higher number of samples?\n\n- The model used is quite simple being only composed of a single vector (or two in the decoupled version). From the results in Table 7, increasing the number of parameters leads to better results. Would increasing the complexity of the model lead to better results?\n\n- In the limitations section I found missing that the proposed method needs to be retrained for the target number of sampling steps. One model trained to generate images with 2 samples, would not be useful for 3 and a new model would need to be trained. 
This might be a problem since different images might necessitate a different number of steps to achieve good quality." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Can the invertibility assumption in Theorem 1 be relaxed and still achieve an upper bound on the KL divergence?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The LD3 algorithm is extremely lightweight, requiring only 100 samples and less than 1 hour on a single GPU to learn optimized sampling schedules.\n- The method is evaluated on a comprehensive set of pretrained models and compared against several baselines, showing improved quality in the majority of cases.\n- A proper ablation study is done on the various choices/hyperparameters." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper tackles the challenge of optimizing timestep schedules for sampling in diffusion and flow-based generative models. The authors propose optimizing the schedule in order to minimize the global truncation error of multi-step sampling by reducing the discrepancy between a high-NFE teacher model and a low-NFE student model. They demonstrate that a relaxed version of this optimization problem yields even better results. Impressively, their proposed algorithm runs very quickly, achieving convergence in under 1 hour on a single GPU. A comprehensive evaluation is done using multiple pretrained diffusion models and ODE solvers. The proposed algorithm is compared against several hand-designed and learnt sampling schedules, and is shown to considerably improve the image quality in the low-NFE regime." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- There are several typos in the paper. See some examples below:\n - Algorithm 1 Line 6: $x'_T ← x'_T + ...$ must be $x'_T ← x_T + ...$\n - Line 251: $x'T \rightarrow x'_T$\n - Line 251: $\Psi*(x_T) \rightarrow \Psi_*(x_T)$\n- Theorem 1 requires more explanation of its invertibility assumption. Specifically, if the NFE is small, the invertibility of the functions $\Psi_*, \Psi_\xi$ is a non-trivial fact, and assuming it requires some justification. \n- The method relies on a learned perceptual distance (LPIPS) to achieve optimal results, as shown by the significant quality drop in Table 7 when switching to a standard Euclidean loss. This raises questions about how well the method might generalize to other data types beyond images."
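For context on the LPIPS-versus-Euclidean point raised above, the two distances can be compared generically as follows, assuming the `lpips` PyPI package (the learned perceptual metric of Zhang et al.); this is an illustration, not LD3's training code, and the random tensors are placeholders for real images.

```python
import torch
import lpips  # pip install lpips

# Batches of RGB images scaled to [-1, 1], shape (N, 3, H, W) -- placeholders here.
x_student = torch.rand(4, 3, 64, 64) * 2 - 1
x_teacher = torch.rand(4, 3, 64, 64) * 2 - 1

perceptual = lpips.LPIPS(net='vgg')          # learned perceptual distance
d_lpips = perceptual(x_student, x_teacher)   # per-image distances, shape (N, 1, 1, 1)
d_l2 = ((x_student - x_teacher) ** 2).flatten(1).mean(dim=1)  # plain Euclidean/MSE

print(d_lpips.flatten(), d_l2)
```

Because LPIPS compares deep features rather than raw pixels, it is only defined for image-like data, which is exactly why the reviewer questions generalization to other modalities.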
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Near the introduction, the paper suggests the approach should be seen as complementary to distillation methods as opposed to going head-to-head, but the goal of both said approaches and LD3 is to achieve good sample quality with the least number of steps possible. Why do the authors argue this is the case? It is not clear to me (and it is not demonstrated in the paper) that distillation methods are compatible with optimizing the timesteps used, e.g. progressive distillation is trained to match two steps of the teacher with the student, so it's not clear a priori whether changing the timesteps later will completely break a model distilled this way." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper includes proofs of soundness of their proposed minimization objectives, going beyond purely empirical contribution.\n- The number of experiments is substantial across both datasets, baselines from prior work, and choice of pretrained models.\n- The experiments conducted include notoriously difficult datasets in the literature of diffusion step reduction like ImageNet, and shows improvement in more complex settings such as a text to image model.\n- The objective is cheap to train compared to prior work; the key being that a very low batch size of 2 is permissible to use.\n- Ablation studies demonstrate the importance of the proposed changes separately.\n- The samples presented qualitatively look very reasonable and show clear improvement over the usual hand-crafted timestep schedules, and they fix the random seed so the same samples can be comapred." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes LD3, a method to discover timesteps that will yield good samples for diffusion models when the number of forward passes is very small at inference time, through optimization. The authors propose a distillation-like objective where a global notion of distance between the original model and a student model with learned timesteps is minimized. Two variants are presented, one \"soft\" objective achieving best results that only requires samples to be within a ball of hyperparameter radius $r$ to the teacher's samples. The soft objective additionally is linked theoretically to also upper bounding the KL divergence between the two models. Various experiments are conducted to demonstrate that LD3 achieves better FID scores than prior methods on few-step sampling, and an ablation study is included to demonstrate the importance of different components of LD3, the most important seemingly being the decoupling of timestep choice and step size, and the use of a perceptual distance (LPIPS) as opposed to pixel-based L2." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There are two main areas around which the paper could be much stronger. The first is in comparisons to distillation methods, which are among the strongest in the literature. The paper includes a comparison to progressive distillation and consistency distillation in Table 9, but it is really difficult to compare these methods apples-to-apples. There are details missing (please correct me if I missed these e.g. in the supplementary material) such as what models were compared and where are the baseline scores taken from; ideally the same model should be post-trained with the different techniques. The number of forward passes across methods also doesn't match, making it difficult to draw any conclusions. One conclusion that can be drawn, however, is that progressive distillation remains better than LD3 in FID score at NFE=8, albeit requiring much more compute to distill.\n\nThe other major weakness is the lack of careful qualitative comparisons to other step reduction methods. The vast majority of the qualitative samples are compared to hand-crafted schedules, which are the weakest baselines. This is really important, especially because prior work has shown that very low FID scores can be achieved somewhat adversarially, resulting in strange samples (e.g., consider the CIFAR10 samples in the GGDM paper), so quantitative results are insufficient to truly demonstrate that LD3 improves over all prior work. Careful side-by-side comparisons of different step reduction methods, derived from the same pre-trained model and using the same initial noise and matching NFE would be significantly more convincing.\n\nOverall, the work is strong, and the quantitative results already put this paper as a valuable contribution to the literature that should be accepted at the conference. I am opting for a weak accept, because the comparisons to distillation methods seem improper and incomplete, and the qualitative comparisons require more care. But even if so, due to the very low cost of the proposed technique and the achieved scores, the work already has intrinsic value. I strongly encourage the authors to address the concerns outlined above as it would make the work excellent." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024learning,\ntitle={Learning to Discretize Denoising Diffusion {ODE}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xDrFWUmCne},\nnote={under review}\n}" }, "abstract": { "value": "Diffusion Probabilistic Models (DPMs) are generative models showing competitive performance in various domains, including image synthesis and 3D point cloud generation. Sampling from pre-trained DPMs involves multiple neural function evaluations (NFE) to transform Gaussian noise samples into images, resulting in higher computational costs compared to single-step generative models such as GANs or VAEs. Therefore, reducing the number of NFEs while preserving generation quality is crucial. To address this, we propose LD3, a lightweight framework designed to learn the optimal time discretization for sampling. LD3 can be combined with various samplers and consistently improves generation quality without having to retrain resource-intensive neural networks. 
We demonstrate analytically and empirically that LD3 improves sampling efficiency with much less computational overhead. We evaluate our method with extensive experiments on 7 pre-trained models, covering unconditional and conditional sampling in both pixel-space and latent-space DPMs. We achieve FIDs of 2.38 (10 NFE), and 2.27 (10 NFE) on unconditional CIFAR10 and AFHQv2 in 5-10 minutes of training. LD3 offers an efficient approach to sampling from pre-trained diffusion models." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Diffusion models", "Efficient Sampling", "Ordinary Differentiable Equations" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/ec79ba5865f4c12903666e09b0b7ad920bd6b6d5.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Learning to Discretize Denoising Diffusion ODEs" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
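To make the idea of learning a time discretization concrete: one generic way to keep a schedule of timesteps strictly decreasing while optimizing it by gradient descent is to softmax unconstrained logits into positive increments. This is an assumed construction for illustration, not necessarily LD3's parameterization; in an LD3-style setup the logits would be trained by backpropagating a teacher-student distance through the sampler.

```python
import torch

class MonotoneSchedule(torch.nn.Module):
    """Learnable timesteps t_max = t_0 > t_1 > ... > t_N = t_min.

    Softmax over unconstrained logits yields positive increments summing
    to one, so the cumulative sum is strictly increasing and the schedule
    strictly decreasing -- a valid discretization by construction.
    """
    def __init__(self, n_steps, t_min=1e-3, t_max=1.0):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.zeros(n_steps))
        self.t_min, self.t_max = t_min, t_max

    def forward(self):
        incr = torch.softmax(self.logits, dim=0)            # positive, sums to 1
        cum = torch.cumsum(incr, dim=0)                     # in (0, 1], ends at 1
        ts = self.t_max - (self.t_max - self.t_min) * cum   # strictly decreasing
        return torch.cat([torch.full((1,), self.t_max), ts])

schedule = MonotoneSchedule(n_steps=10)
print(schedule())  # from t_0 = 1.0 down to t_10 = 1e-3, differentiable w.r.t. logits
```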
xE3Ra2GTpX
Multi-Grained Knowledge for Retrieval-Augmented Question Answering on Hyper-long Contexts
main
Active
Knowledge-based Question Answering;Retrieval-Augmented;Large Language Model Generation;Information Extraction;Hyper-long Contexts
generative models
3;3;5;5
5;3;3;4
2;2;2;1
2;3;2;2
3;3;2;3
4
3.75
1.75
2.25
2.75
-0.301511
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See \"Weaknesses\"." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper proposes MKRAG (Multi-grained Knowledge Retrieval-Augmented Generation) for hyper-long context question answering. By integrating multi-grained entity graphs with an iterative retrieval and reasoning process, MKRAG addresses the limitations of traditional models constrained by context window size.\n2. This paper introduces LoopAgent, an iterative retrieval framework that progressively refines queries across multiple retrieval cycles. By incorporating advanced reasoning capabilities, LoopAgent improves both retrieval and answering accuracy and mitigates information loss in traditional single-pass retrieval methods, particularly in complex multiple-entity scenarios." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a multi-grained entity graph-based QA method that constructs an entity graph and dynamically combines both local and global contexts, capturing information across three granularity levels: micro, feature, and macro levels, and incorporates iterative retrieval and reasoning mechanisms to generate accurate answers for hyper-long contexts. Evaluation results on LongBench and InfiniteBench demonstrate the effectiveness of the approach, significantly outperforming existing methods in both the accuracy and granularity of the extracted answers, and it can be deployed in online novel-based applications." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The entities and their associated attributes, relationships, and events are extracted by LLMs. However, as noted in previous work, LLMs may fall short in information extraction (IE) tasks, such as entity extraction, relation extraction, and event extraction. If LLMs cannot handle IE well, errors could propagate through the system, leading to random, unpredictable, and non-generalizable outcomes. It could be better if the authors provide evaluation on the reliability of the IE process.\n2. While the model of experiments is based on ERNIE, there lack a comparison with the other ERNIE variants capable of handling longer inputs, such as ERNIE-Turbo-128K. Including a comparison with models from the same series would strengthen the paper by better demonstrating the effectiveness of the proposed approach." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1.\tHow is \"importance weight\" defined in equation (6)? More explanation of this part would be helpful.\n2.\tWhat is the module performance in each subtask? e.g., the recall rate of entities, accuracy of entity aggregation, effectiveness of entity pruning, effectiveness of thresholds in equation (10), and more?\n3.\tWhat is the average number of rewrites for each question (line 401)? How is the computation cost for the LoopAgent?\n4.\tThe hyter-long context QA benchmark, InfiniteBench, has an average output of 6.3 tokens, which should be only a few words. Does these questions really need query-decomposition described in line 383?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. The EntiGraph module effectively enhances the representation and may have the potential to extract most necessary information from the document.\n2. The multi-grained knowledge generation from entity graph helps capture possible QA from different granularity." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "To address the challenges of hyper-long context QA, particularly the limitations of context windows and retrieval errors in retrieval-augmented generation (RAG) due to inadequate semantic representation, this paper proposes Multi-grained Knowledge RAG Method (MKRAG). MKRAG involves extracting entities from context with EntiGraph chunk-by-chunk, generating multi-grained QA pairs, and iteratively retrieve related information by refining queries to get the final answer.\n\nResults show that the proposed method achieve on par or slightly better performance compared with SOTA models/methods, and achieves high performance gain in the online test." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tThis paper claims that the EntiGraph module enables accessing nearly all the necessary information. However, the performance of EntiGraph is not discussed.\n2.\tIn the Entity Pruning section, how the threshold $\\tau$ is selected is not explained. There are also pre-defined thresholds in (10). The selection of thresholds are not justified by detailed analysis of module performance.\n3.\tBy turning the document into entity graphs and generating knowledge accordingly, the model risks missing information during these processes, and the agent may be unable to answer the question given the insufficient information.\n4.\tIn the ablation study section, “Chuck Retrieval + Baseline” setting uses a chunk size of 500 tokens/chunk while the context window limit is 4k. 
It is not clear what is the chunk size used in MKRAG and if the small chunk size in the previous setting limits its performance.\n5.\tThis paper claims that the proposed method demonstrating high accuracy in Online Test and high user satisfaction rate. However, details of the tests are not provided and the results are not compared against other SOTA models/methods." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1.Authors have not clearly stated the key innovations of this paper. Authors to explicitly state their key innovations and provide a clear comparison to existing methods, highlighting specific novel aspects of their approach, e.g., a specific new method or a new framework of well incorporating existing techniques.\n2. The multi-grained entity graph and iterative retrieval, while effective, could be computationally intensive, limiting scalability in resource-constrained environments. Therefore, authors should conduct time and space complexity analyses, as well as perform corresponding experiments for verification, such as the Inference Time and the Time per Iteration.\n3.Some experimental details are missing, such as hardware specifications, number of iterations, the stopping condition for iterative retrieval, and the convergence criteria for all models.\n4. About \" we employ context aggregation algorithms to integrate both local and global contexts of the same entity, and utilize an LLM, EntiGraph, to generate multi-grained QA pairs (i.e., micro-level, feature-level, and macro-level). This multi-level strategy not only ensures that the model produces highly accurate answers for complex queries, but also mitigates the fragmentation of information that often hampers traditional methods\", 1) the \"EntiGraph\" is a LLM model? 2) why generate multi-grained QA pairs from micro-level, feature-level, and macro-level, and what are the insights about these? 3) why this multi-level strategy can ensure high accurate answers? 4) does this strategy lead to much noise?\n5. The model is relatively complex, and the paper fails to clearly present the training and optimization process of the model. 1)How to perform joint optimization among multiple modules, i.e., ITERATIVE RETRIEVAL AGENT, MULTI-GRAINED REPRESENTATION, ENTITY EXTRACTION? How is the training data for each module constructed? For each entity, how to construct the embeddings for its attributes, relationships, events and temporal information?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1.The MKRAG model captures the local and global information in the text by constructing multi-grained entity graphs (including micro, feature and macro levels), and can generate more accurate answers than traditional RAG methods.\n2. 
Overall well written and easy to understand.\n3.Iterative retrieval agent, MKRAG model uses LoopAgent iterative retrieval mechanism to refine the query through multiple rounds of retrieval to alleviate the problem of information fragmentation encountered by traditional methods in dealing with ultra-long text.\n4.In this paper, based on the hyper-long context question and answer field, the experiments not only aim at the long text, but also verify that the proposed model is also applicable in the short text." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, a model called MKRAG is proposed to deal with hyper-long context QA tasks. Through multi-grained knowledge generation and iterative retrieval agent, MKRAG model can effectively extract and integrate information, and thus improves the accuracy of question answering. The experimental results show that the MKRAG model achieves excellent performance on multiple benchmark datasets, and shows a strong ability of long text understanding and multi-domain question answering." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.Authors have not clearly stated the key innovations of this paper. Authors to explicitly state their key innovations and provide a clear comparison to existing methods, highlighting specific novel aspects of their approach, e.g., a specific new method or a new framework of well incorporating existing techniques.\n2. The multi-grained entity graph and iterative retrieval, while effective, could be computationally intensive, limiting scalability in resource-constrained environments. Therefore, authors should conduct time and space complexity analyses, as well as perform corresponding experiments for verification, such as the Inference Time and the Time per Iteration.\n3.Some experimental details are missing, such as hardware specifications, number of iterations, the stopping condition for iterative retrieval, and the convergence criteria for all models.\n4. About \" we employ context aggregation algorithms to integrate both local and global contexts of the same entity, and utilize an LLM, EntiGraph, to generate multi-grained QA pairs (i.e., micro-level, feature-level, and macro-level). This multi-level strategy not only ensures that the model produces highly accurate answers for complex queries, but also mitigates the fragmentation of information that often hampers traditional methods\", 1) the \"EntiGraph\" is a LLM model? 2) why generate multi-grained QA pairs from micro-level, feature-level, and macro-level, and what are the insights about these? 3) why this multi-level strategy can ensure high accurate answers? 4) does this strategy lead to much noise?\n5. The model is relatively complex, and the paper fails to clearly present the training and optimization process of the model. 1)How to perform joint optimization among multiple modules, i.e., ITERATIVE RETRIEVAL AGENT, MULTI-GRAINED REPRESENTATION, ENTITY EXTRACTION? How is the training data for each module constructed? For each entity, how to construct the embeddings of its attributes, relationships, events and temporal information?" 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See above" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The multi-grained QA method integrates a graph-based representation of entities and iterative retrieval, allowing the model to retain contextual coherence and retrieve relevant answers from hyper-long contexts.\n2. The inclusion of LoopAgent, an iterative retrieval mechanism, improves QA accuracy by refining the search process across multiple rounds, which is particularly beneficial for complex and nuanced queries.\n3. The model shows state-of-the-art performance on both hyper-long and moderately long datasets, outperforming baseline models like GPT-4 and other RAG-based approaches on LongBench and InfiniteBench.\n4. The model has been effectively deployed in real-world applications, such as online novel-based platforms, and demonstrates enhanced scalability and practical utility in handling real-time queries." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses a challenge in hyper-long context QA, where retrieving precise answers from extensive and dispersed content poses substantial obstacles. Current approaches often suffer from limitations, such as input-length constraints in LLMs and semantic loss in RAG systems. The authors propose a multi-grained entity graph-based QA framework, termed MKRAG, that operates across three granularity levels—micro, feature, and macro—to improve information extraction and reasoning across hyper-long texts. Their framework also introduces an iterative retrieval mechanism, LoopAgent, designed to refine retrievals and improve accuracy through multiple rounds. The evaluations across datasets show that MKRAG achieves state-of-the-art results, particularly excelling in scenarios with high granularity requirements, such as long-tail or detail-oriented queries." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Although the model circumvents some LLM limitations, the use of a multi-granularity framework and iterative retrieval adds complexity and computational demands, which may be prohibitive for broader real-time applications. How to evaluate the efficiency of the proposed method?\n2. The approach heavily relies on accurate entity extraction and structured graph relationships. In cases where entity relationships are sparse or ambiguous, the model's performance may degrade. Do the authors test other entity extraction methods other than EntiGraph?\n3. The paper should further discuss trade-offs between different granular levels and how the system decides the optimal level of granularity in real time, especially in cases with sparse information.\n4. 
There are many prompt/context compressing methods that should be included as baselines:\n4.1. Jiang, Huiqiang, et al. \"Llmlingua: Compressing prompts for accelerated inference of large language models.\" arXiv preprint arXiv:2310.05736 (2023).\n4.2. Jiang, Huiqiang, et al. \"Longllmlingua: Accelerating and enhancing llms in long context scenarios via prompt compression.\" arXiv preprint arXiv:2310.06839 (2023).\n4.3. Pan, Zhuoshi, et al. \"Llmlingua-2: Data distillation for efficient and faithful task-agnostic prompt compression.\" arXiv preprint arXiv:2403.12968 (2024).\n4.4. Li, Yucheng, et al. \"Compressing context to enhance inference efficiency of large language models.\" arXiv preprint arXiv:2310.06201 (2023)." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "This paper presents a Multi-Grained Retrieval-Augmented Generation (MGRAG) method for hyper-long context question answering, integrating multi-grained entity graph with iterative retrieval and reasoning." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024multigrained,\ntitle={Multi-Grained Knowledge for Retrieval-Augmented Question Answering on Hyper-long Contexts},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xE3Ra2GTpX},\nnote={under review}\n}" }, "abstract": { "value": "In the task of hyper-long context question answering (QA), a key challenge is extracting accurate answers from vast and dispersed information, much like finding a needle in a haystack. Existing approaches face major limitations, particularly the input-length constraints of Large Language Models (LLMs), which hinder their ability to understand hyper-long contexts. Furthermore, Retrieval-Augmented Generation (RAG) methods, which heavily rely on semantic representations, often experience semantic loss and retrieval errors when answers are spread across different parts of the text.\nTherefore, there is a pressing need to develop more effective strategies to optimize information extraction and reasoning. \nIn this paper, we propose a multi-grained entity graph-based QA method that constructs an entity graph and dynamically combines both local and global contexts. Our approach captures information across three granularity levels (i.e., micro-level, feature-level, and macro-level), and incorporates iterative retrieval and reasoning mechanisms to generate accurate answers for hyper-long contexts.\nSpecifically, we first utilize EntiGraph to extract entities, attributes, relationships, and events from hyper-long contexts, and aggregate them to generate multi-granularity QA pairs. Then, we retrieve the most relevant QA pairs according to the query. Additionally, we introduce LoopAgent, an iterative retrieval mechanism that dynamically refines queries across multiple retrieval rounds, combining reasoning mechanisms to enhance the accuracy and effectiveness of answering complex questions.\nWe evaluated our method on various datasets from LongBench and InfiniteBench, and the experimental results demonstrate the effectiveness of our approach, significantly outperforming existing methods in both the accuracy and granularity of the extracted answers. Furthermore, it has been successfully deployed in online novel-based applications, showing significant improvements in handling long-tail queries and answering detail-oriented questions." 
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Knowledge-based Question Answering", "Retrieval-Augmented", "Large Language Model Generation", "Information Extraction", "Hyper-long Contexts" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/a439494416d2ea379ddf8350a49fa98f3d4a6eb9.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Multi-Grained Knowledge for Retrieval-Augmented Question Answering on Hyper-long Contexts" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xE5ZaZGqBW
Hypercone Assisted Contour Generation for Out-of-Distribution Detection
main
Active
OOD detection;Out-of-distribution detection;Computer Vision;Deep Learning;Representation Learning
other topics in machine learning (i.e., none of the above)
3;5;6;6
3;3;4;4
2;3;3;3
2;3;3;3
1;3;4;3
5
3.5
2.75
2.75
2.75
0.816497
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See Weakness." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "HACk-OOD introduces a unique method using hypercone projections to delineate class contours, avoiding traditional Gaussian distribution assumptions and offering greater flexibility in complex feature spaces. The method achieves competitive, often superior, results on challenging datasets like CIFAR-100, demonstrating strong performance in both near and far OOD detection scenarios." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "HACk-OOD is a post-training OOD detection method using hypercone projections to construct class-specific contours in embedding space. The approach achieves SOTA performance on CIFAR-100 and improves with larger networks. Including Imagenet experiments could further demonstrate scalability on large datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Experiments are limited to CIFAR-based datasets, testing on a large-scale dataset like Imagenet would better validate the method’s scalability. Also evaluating HACk-OOD on the OpenOOD benchmark would provide a clearer comparison to recent methods. I would consider rating this paper higher if Imagenet results were provided.\n\n2. Missing comparisons with some of the latest post-hoc OOD methods, such as ASH and SCALE. Including these would offer a more comprehensive assessment of its relative performance.\n\nDjurisic, Andrija, et al. \"Extremely simple activation shaping for out-of-distribution detection.\" ICLR 2022\nXu, Kai, et al. \"Scaling for training time and post-hoc out-of-distribution detection enhancement.\" ICLR 2023" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the weaknesses part." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. 
The method takes an interesting approach to distance-based OOD detection by relaxing the distributional assumptions and, unlike naive KNN, still leveraging nearby training data statistics to construct class contours. To the best of my knowledge, the use of hypercones for this purpose is novel and appears well-motivated.\n\n2. The authors present their method clearly, making the paper easy to follow and understand.\n\n3. Although the method involves hyperparameter k, the authors provide a practical approach to estimating it without requiring additional OOD data.\n\n4. The experiments investigate various model sizes and training losses, demonstrating their impact on distance-based methods. Overall, the method benefits more from larger models trained with contrastive loss, as these models produce more distinguishable features." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a post-training out-of-distribution (OOD) detection method, HAC_k-OOD, which models the training data distribution through a set of hypercones and assesses OOD status based on whether a test sample falls within any hypercone. Specifically, for each class, the method first computes the class centroid and defines an angular boundary using the k-th nearest neighbors for each training point. Additionally, a radial boundary is set based on the mean and variance of the sample norms within the angular boundaries. During inference, a sample is classified as OOD if it lies outside either the angular or radial boundaries. Experiments were conducted using ResNet-18, ResNet-34, and ResNet-50, with both supervised contrastive learning and cross-entropy loss. The method was evaluated on CIFAR-10/100 as the in-distribution dataset and tested on various OOD datasets, covering both near and far OOD scenarios." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The computational complexity is a concern for this method. Since the computation appears to increase with the size of the training dataset, it’s unclear if this approach would be feasible for large-scale, real-world applications. Although the authors state that the method is computationally efficient and support this with inference time per sample, I encourage them to provide a more detailed discussion on this aspect. For instance, what is the time required to construct the hypercones? A comparison of inference times with other methods would also be valuable.\n\n2. How would the method perform on a large-scale, real-world dataset like ImageNet? Many recent OOD detection methods use ImageNet-1k as the in-distribution (ID) dataset. I encourage the authors to consider experiments on this dataset to evaluate the general applicability of the method in more realistic scenarios.\n\n3. Recent work has explored OOD detection using CLIP as a backbone model (eg. [a1]), as CLIP may offer a more robust feature space. It would be interesting to see how this method performs when applied to a CLIP-based model.\n\n4. Could the authors elaborate on why this method outperforms a naive KNN approach? One advantage seems to be that the method leverages the nearest neighbors within the training set (as opposed to KNN’s i.i.d. approach) to construct hypercones, which may capture more robust information about class boundaries. 
Additionally, an ablation study using either angular or radial boundaries separately for OOD detection could provide valuable insights into the method’s effectiveness and support future research.\n\n5. While the paper is generally well-written, a few sections could be clearer. For example, in lines 80-81, P_in is referenced without being introduced in a previous formula. Additionally, in Section 5.2, only ResNet-34 is mentioned as the backbone model, though ResNet-18 and ResNet-50 are also used.\n\n[a1] Ming, Yifei, et al. \"Delving into out-of-distribution detection with vision-language representations.\" Advances in neural information processing systems 35 (2022)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- (see Weakness 1): how does the method perform on the ImageNet (200 or 1k) OOD benchmarks? Its evaluation on benchmarks beyond *quite simple* datasets (i.e., CIFAR-10/CIFAR-100) would strengthen the claims. \n- What is the intuition behind the use of hypercones? Isn't the embedding space of a class more \"dense\" close to its centroid?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is well-written and easy to follow. The authors provide a good summary of the different approaches to OOD, a background section on hypercones, and a clear and precise method description. \n2. Relevance and novelty of the method: the algorithm doesn't require assumptions about the data distribution and can model complex embedding spaces since it draws multiple hypercones per class and since it defines per-hypercone decision boundaries. \n3. The authors discuss some limitations of the method (e.g., it works less well with smaller models)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a post-training method for out-of-distribution (OOD) detection. The method approximates the contour of each class with a set of hypercones and defines per-hypercone decision boundaries. \nMore specifically, the hypercones are drawn as follows: \n1. compute per-class centroids, which will be the apex of the class hypercones \n2. take a class sample and set the hypercone axis to be the vector that points to its (penultimate) representation \n3. set the opening angle to be the angle between the hypercone axis and the k-th nearest neighbor from the sample representation\n4. set the decision boundary using the distribution of representations within the hypercone's angular boundary\n\nThe method is an extension of another technique, SSD+, which assumes the decision boundary could be modeled with a unique hypersphere (or multidimensional ellipsoid) per class. Its training does not require OOD data.
\n\nExperimentally, the authors follow common practice and evaluate the OOD performance with the CIFAR-100 dataset as an in-distribution dataset and many different datasets as out-of-distribution datasets. Two types of pre-trained classification models are tested: models trained with a softmax cross-entropy loss and with a supervised contrastive loss. \nThe method achieves SOTA results in the supervised contrastive learning setup and is competitive in the cross-entropy setting." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Limited evaluation: the method is only evaluated on models pre-trained on the *quite simple* CIFAR datasets and not on more complex datasets such as the ImageNet-200 or ImageNet-1k OOD benchmarks." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Please address the questions in the weaknesses part as above." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper addresses an important problem and proposes a new distance-based model based on hypercones. \n2. The evaluation of the method seems comprehensive and it achieves strong results in some cases." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a post-training strategy for Out-Of-Distribution (OOD) detection for image classification. The proposed method assumes no access to the OOD samples and employs a set of hypercones with varying cutoff distances in feature space to define the class boundaries of in-distribution data. The work evaluates this method and a combination with a previous OOD technique on the Far-OOD and near-OOD detection benchmarks, with comparisons to a set of baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper is not clearly motivated. The discussion on training-based and distance-based post-training methods is insufficient. While the introduction section lists many previous methods, it is unclear what OOD modeling challenges this method aims to address, in particular for the distance-based approach.\n2. The assumption of this method is very restrictive. As stated in Line 064, it requires \"that ID and OOD data are separable in the space\", which is unrealistic for real-world data. \n3. The novelty of this method is limited. The proposed hypercone representation is similar to a mixture of Gaussian kernels for the ID data distribution. \n4. The presentation of this work lacks clarity and the technical details are difficult to follow. Several parts of Sec 4.3 are confusing: 1) Line 260: Why do the hypercone representations rely on the test data feature Z_{test}, which should not be used during model construction?
2) Line 299: How is the score function defined and what threshold is used during the inference (in Sec 4.4)? \n5. The experimental evaluation is lacking in three aspects: 1) The experimental setup is limited, as it only considers three ResNet-based backbones. More modern architectures, such as ViT, should be included to validate its generalization. 2) The ablation study is lacking. What contributions come from the hypercones? What if they are replaced by Gaussians? 3) The performance of the original version HAC_k-OOD is mixed in both Tables 1 and 2, and in most cases, it is worse than the SOTA methods. It is unclear whether the proposed representation is truly effective." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "HACk-OOD is a novel, training-agnostic OOD detection method that constructs hypercones in the feature space to approximate in-distribution contours of classes, without making distributional assumptions or explicitly training for OOD detection." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024hypercone,\ntitle={Hypercone Assisted Contour Generation for Out-of-Distribution Detection},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xE5ZaZGqBW},\nnote={under review}\n}" }, "abstract": { "value": "Recent advances in the field of out-of-distribution (OOD) detection have placed great emphasis on learning better representations suited to this task. While there have been distance-based approaches, distributional awareness has seldom been exploited for better performance. We present HACk-OOD, a novel OOD detection method that makes no distributional assumption about the data, but automatically adapts to its distribution. Specifically, HACk-OOD constructs a set of hypercones by maximizing the angular distance to neighbors in a given data-point's vicinity, to approximate the contour within which in-distribution (ID) data-points lie. Experimental results show state-of-the-art FPR@95 and AUROC performance on Near-OOD detection and on Far-OOD detection on the challenging CIFAR-100 benchmark without explicitly training for OOD performance." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "OOD detection", "Out-of-distribution detection", "Computer Vision", "Deep Learning", "Representation Learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/dd3d79734542f30802682964c28d7108aeded14a.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers.
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Hypercone Assisted Contour Generation for Out-of-Distribution Detection" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
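The reviewer summaries above describe HACk-OOD's construction step by step: per-class centroids serve as apexes, each training sample defines a hypercone axis, the opening angle comes from the k-th nearest neighbor, and a radial cutoff comes from the norm statistics inside the angular boundary. A NumPy sketch reconstructed from that description (not the authors' code; the mean-plus-n_std radial rule in particular is an assumed stand-in for the paper's exact boundary) could look like this:

```python
import numpy as np

def build_hypercones(Z, y, k=10, n_std=2.0):
    # One hypercone per training sample, following the reviewers' summary:
    # apex = class centroid, axis = direction of the sample, opening angle =
    # angle to the k-th nearest neighbor, radial cutoff from the norm
    # statistics inside the angular boundary. (A sketch, not official code.)
    cones = []
    for c in np.unique(y):
        Zc = Z[y == c]
        mu = Zc.mean(axis=0)                             # hypercone apex
        V = Zc - mu
        Vn = V / np.linalg.norm(V, axis=1, keepdims=True)
        for i in range(len(V)):
            d = np.linalg.norm(V - V[i], axis=1)
            j = np.argsort(d)[k]                         # k-th NN (index 0 = self)
            cos_open = float(Vn[i] @ Vn[j])              # angular boundary
            inside = (Vn @ Vn[i]) >= cos_open            # samples within the cone
            norms = np.linalg.norm(V[inside], axis=1)
            cones.append((mu, Vn[i], cos_open, norms.mean() + n_std * norms.std()))
    return cones

def is_in_distribution(z, cones):
    # A test point is ID if it falls inside at least one hypercone,
    # i.e., both the angular and the radial condition hold.
    for mu, axis, cos_open, r_max in cones:
        v = z - mu
        r = np.linalg.norm(v)
        if 0.0 < r <= r_max and (v / r) @ axis >= cos_open:
            return True
    return False
```

Written out this way, the efficiency concern raised above also becomes concrete: construction is quadratic in the per-class sample count, and naive inference scans every cone for every test point.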
xEDB5sSIK0
Label Informativeness-based Minority Oversampling in Graphs (LIMO)
main
Active
class imbalance;graph neural networks;mutual information;label informativeness
learning on graphs and other geometries & topologies
3;5;5;5;6
4;4;4;4;4
2;2;3;2;3
1;2;2;3;3
2;2;3;2;3
4.8
4
2.4
2.2
2.4
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Please address the weaknesses mentioned above.\n2. In the evaluation, is the accuracy of the prediction computed using the augmented graph (after upsampling), or is it just based on the original graph?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Interesting ideas on using label informativeness to help tacking imbalance dataset.\n2. The is built on top of SMOTE, a well known upsampling technique, and augment it to graph setting.\n3. The experiments demonstrate the claimed benefit of the proposed approach." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents an oversampling technique for imbalance classification in graph classification settings. The method is based on the label informativeness as the criteria to augment the minority class samples. The method started with the regular non-graph SMOTE technique for generating new samples. Then it uses LI criteria to determine which nodes are connected to the new samples, in such a way that maximizes the LI criteria. The authors then demonstrate the performance of the proposed model in the real experiments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Presentation. In many places, terminologies/abbreviations/symbols/equations are used without explaining them. For examples:\n - “IR”. (Page 2)\n - “SMOTE”. (Page 3)\n - “Eq (6)”. (Page 4)\n - “Eq (7)”. (Page 4)\n - etc\n2. Motivation. The motivation behind using LI for upsampling is not explained clearly. Why is using LI a good idea to do upsampling? Why is having high LIs desirable? etc.\n3. Soundness. The data augmentation by creating new nodes with new features using SMOTE technique makes sense. However, the way the proposed method augments the edges by attaching the new nodes to potentially unrelated nodes (not a neighbor of the original node), does not necessarily make sense. I get that the goal is to improve the LI metric. However, edges in the original graph represent certain relationship patterns. Attaching the new nodes to unrelated nodes will create new relationship patterns that do not exist in the original graph. These relationship patterns in the graph are something that is not yet explored by the model." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Compared to existing oversampling methods based on graph data, what research gaps are addressed by the method proposed in this paper? \n2. The authors proved Theorem 1 in the manuscript. Still, I am more interested in whether the variation of t for different problems affects the results of the experiments and how the optimal value of t is determined.\n3. Is there a generalization of sacrificing significant computational cost to improve computational accuracy?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The authors propose Label Informativeness-based Minority Oversampling and theoretically establish the relationship between label informativeness and model prediction accuracy in GNN. \n2. The authors analyze the effect of variation in the number of inter- and intra-class edges on LI. \n3. The manuscript is logical and well-structured. The methodology section is clearly presented." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a new method for resolving class imbalances in graph-structured data based on a labeled informativeness-based minority oversampling method, called LIMO. By increasing the edges in a way that maximizes the amount of label information, LIMO strategically samples nodes of minority classes. And satisfactory results are obtained in node categorization dataset." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. At the end of the first section, the authors show LIMO only in the figure, but there is no corresponding textual description in the text.\n2. The currently proposed Label Informativeness-based Minority Oversampling (LIMO) approach is indeed a novel idea to solve the graph imbalance problem, but the core differences with existing approaches can be further highlighted in the introduction section or related work section such as 10.1109/IJCNN60899.2024.10650494, 10.1109/ICASSP48485.2024.10448064. \n3. Discussion on the generalizability of LIMO is lacking in the manuscript, the authors only validate the performance effect of LIMO in GraphSAGE and lack discussion on LIMO in other GNNs such as Graph Convolutional Networks (GCN), Graph Attention Networks (GAT/ GAN). The authors need to provide a theoretical discussion of how LIMO might be generalized to other GNN architectures, or validated experimentally. \n4. The experimental results are not comprehensive enough, and oversampling methods that specifically address the problem of graph data imbalance are missing from the comparison methods. Therefore, it is suggested that the authors should add methods that target graph data correlation in the experimental section, e.g., 10.1109/IJCNN60899.2024.10650494, 10.1109/ICASSP48485.2024.10448064.\n5. Although the authors mention the inherent cost of the LIMO approach in generating and evaluating new edges in the summary section, the authors do not mention whether the added computational cost is within an acceptable range. The authors should provide actual runtime comparisons between LIMO and baseline methods, or a theoretical complexity analysis." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weakness." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is easy-to-follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "To alleviate the class-imbalanced issue, this paper propose a novel algorithm, Label Informativeness-based Minority Oversampling (LIMO), aiming to strategically synthesize minority nodes by augmenting edges to maximize label informativeness. And the experiments conducted on various homophilous and heterophilous benchmark datasets show the improvements compared with the baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The novelty is limited. It seems just simply inject SMOTE with label informativeness, however, the label informativeness is borrowed from previous work without clear explanation. Specifically, what is the meaning of the definition of LI in Eq.2? What situation will the LI(G) increase? And when will LI(G) decrease? More importantly, how does LI(G) influence the performance of class-imbalanced learning?\n\n2. This paper fails to demonstrate its strengths compared with previous works. In lines 81-87, the authors claim that the proposed method take label informativeness to handle class-imbalanced issue. But they fail to illustrate the limitations of previous works, and how the proposed method can tackle the limitations? Why does the previous works, like GraphSMOTE [1], GraphENS[2] and TAM[3], can not improve the label informativeness? What is the strengths of LIMO compared with these methods?\n\n3. In line 152, during the node generation, LIMO need to find \"its nearest neighbor $u$ in the same class\". Does node $u$ belong to the labeled training set or the unlabeled nodes? If it comes from the training set, will it lead to overfitting when the number of minority classes is too small? If $u$ is an unlabeled node, how to get a reliable pseudo-label in the class-imbalanced scenario?\n\n4. Lack of complexity analysis of the proposed method. In lines 11-13 of Algorithm 1, for each minority node $v$, LIMO need to determine the edge between $s$ and $w$, and $w \\in \\mathcal{V}-\\{v\\}$. The computation cost seems pretty high.\n\n5. In line 216, \"Specifically, it suggests that enhancing the connectivity between classes (increasing inter-class edges) can improve the informativeness of the labels\". This conclusion seems to contradict graph homophily [4] (the foundation of GNN), i.e., the connected nodes often come from the same class. 
In my opinion, we always want to add edges between two nodes from the same class, so that the nodes can aggregate more information from the same class, thus helping the node classification, just like GraphSMOTE[1] does. I have some doubt about the conclusion of line 216.\n\n6. In addition, the baselines is out-of-date, there are many related works should be taken into comparison, for example, the oversampling methods like GraphENS[2] and GraphSHA[5], the pseudo-labeling method like GraphSR[6] and the loss adjustment method TAM[3]. The experimental setup is inappropriate. In line 729, \"When the minority class had fewer than three nodes, we allocated one node each for training, validation, and testing.\", It is unreasonable to have only one node for validation and testing. Why is the experimental setting not consistent with the previous method?\n\n\n\n[1] Zhao T, Zhang X, Wang S. Graphsmote: Imbalanced node classification on graphs with graph neural networks[C]//Proceedings of the 14th ACM international conference on web search and data mining. 2021: 833-841.\n\n[2] Park J, Song J, Yang E. Graphens: Neighbor-aware ego network synthesis for class-imbalanced node classification[C]//International conference on learning representations. 2021.\n\n[3] Song J, Park J, Yang E. TAM: topology-aware margin loss for class-imbalanced node classification[C]//International Conference on Machine Learning. PMLR, 2022: 20369-20383.\n\n[4] McPherson M, Smith-Lovin L, Cook J M. Birds of a feather: Homophily in social networks[J]. Annual review of sociology, 2001, 27(1): 415-44\n\n[5] Li W Z, Wang C D, Xiong H, et al. Graphsha: Synthesizing harder samples for class-imbalanced node classification[C]//Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2023: 1328-1340.\n\n[6] Zhou M, Gong Z. GraphSR: a data augmentation algorithm for imbalanced node classification[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2023, 37(4): 4954-4962." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "* In the experiments, what does the imbalance ratio represent specifically set within the range of 0.1 to 0.6?\n* Different dataset seems to show different performance. For example, in Table 10 when imbalance ratio = 0.1/0.2, while the LI of LIMO is the largest, the performances of RW and GS are better. Why?\n* The experiment results can be further explained. For instance, in Figures 5 and 10, no significant correlation is observed once the fraction exceeds 0.25. Additionally, in Table 8, when the imbalance ratio is set to 0.1, ACC and F1-Score remain the same across four baseline methods.\n* It is unclear whether the effectiveness stems more from well-synthesized node features or from the generated edges. Which one is more important? What are the appropriate $\\delta$ values for interpolating node features?" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* In the experiments, what does the imbalance ratio represent specifically set within the range of 0.1 to 0.6?\n* Different dataset seems to show different performance. For example, in Table 10 when imbalance ratio = 0.1/0.2, while the LI of LIMO is the largest, the performances of RW and GS are better. Why?\n* The experiment results can be further explained. For instance, in Figures 5 and 10, no significant correlation is observed once the fraction exceeds 0.25. Additionally, in Table 8, when the imbalance ratio is set to 0.1, ACC and F1-Score remain the same across four baseline methods.\n* It is unclear whether the effectiveness stems more from well-synthesized node features or from the generated edges. Which one is more important? What are the appropriate $\\delta$ values for interpolating node features?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The paper is well-written and easy to understand.\n* The paper focuses on the problem of class imbalance, a critical issue in many real-world applications.\n* Rich experiments have been conducted on a variety of datasets, examining different aspects." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents LIMO, an algorithm designed to tackle class imbalance in graph data. It works by adding synthetic edges between minority class nodes to increase Label Informativeness. It aims to improve the performance of GNNs on imbalanced datasets without significantly increasing the volume of data. The authors provide theoretical analysis and experimental results, showing LIMO outperforms existing methods, especially on heterophilous graphs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The paper lacks a discussion of several recent and relevant baselines for addressing class imbalance in graphs. Notably, recent methods such as GraphENS [1], TAM [2], and GraphSHA [3] are not included in the experimental comparisons.\n* The theoretical proof with t=1.31167628 is derived from simulation, but the paper does not provide a detailed explanation or justification of the simulation settings used. This lack of transparency limits the reliability of this theoretical result.\n* While the paper theoretically claims that LI is directly proportional to GNN accuracy, experimental results do not consistently support this. For instance, in Figure 9, the correlation between LI and accuracy weakens when added_fraction > 0.5, especially at fraction = 0.75, where no strong positive correlation is observed.\n\n[1] Park, J., Song, J., & Yang, E. (2021). Graphens: Neighbor-aware ego network synthesis for class-imbalanced node classification. In International conference on learning representations.\n[2] Song, J., Park, J., & Yang, E. (2022, June). TAM: topology-aware margin loss for class-imbalanced node classification. In International Conference on Machine Learning (pp. 20369-20383). PMLR.\n[3] Li, W. Z., Wang, C. D., Xiong, H., & Lai, J. H. (2023, August). Graphsha: Synthesizing harder samples for class-imbalanced node classification. 
In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (pp. 1328-1340)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "NA" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1.\tThe calculation of $t$ value given in Theorem 1 is not rigorous, since the authors explore it in a certain range using predefined steps, which might not be the optimal value. The proof of Theorem 2 is also not convincing.\n\n2.\tThe chosen baselines are too old, and some key baselines are not discussed and compared, e.g., GraphENS [1], and GraphSHA [2], etc.\n\n3.\tNo time and space complexity analyses are given to show the efficiency of the proposed model. \n\n4.\tThe authors claim that the proposed model does not significantly increase the data volume. However, the authors synthesize nodes and edges in the graph, which indeed increases the data volume.\n\n5.\tThe caption in Figure 1 does not provide any information, and the authors did not cite references appropriately. For instance, SMOTE was not cited upon its first mention. \n\n6.\tWhat labels are used in the Amazon datasets? The original datasets do not include labels for the user nodes.\n\n\nReferences:\n\n[1] Park J, Song J, Yang E. Graphens: Neighbor-aware ego network synthesis for class-imbalanced node classification[C]//International conference on learning representations. 2021.\n\n[2] Li W Z, Wang C D, Xiong H, et al. Graphsha: Synthesizing harder samples for class-imbalanced node classification[C]//Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2023: 1328-1340." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1.\tThe paper is generally well-written and easy to understand, meanwhile, the results appear to be promising.\n\n2.\tExperiments are conducted on both homophilous and heterophilous datasets, which broadens its application. \n\n3.\tVarious ablation studies are given to show the impacts of the synthesized nodes and generated edges." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates class imbalanced graph representation learning, where certain classes have significantly fewer nodes. Ignoring such kind of information will result in biased learning or overfitting. The authors claim that existing oversampling techniques overlook the label informativeness of the graphs, which measures the amount of information regarding to its neighbor’s labels. To this end, the authors propose LIMO to oversample minority class nodes by maximizing LI. Synthetic node features are generated with SMOTE, then edges are added by analyzing the derivative of its LI. 
Comprehensive experiments on both homophilous and heterophilous datasets demonstrate that the proposed model can outperform recent baselines under different imbalance ratios." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tThe calculation of the $t$ value given in Theorem 1 is not rigorous, since the authors explore it over a certain range using predefined steps, which might not yield the optimal value. The proof of Theorem 2 is also not convincing.\n\n2.\tThe chosen baselines are too old, and some key baselines are neither discussed nor compared, e.g., GraphENS and GraphSHA.\n\n3.\tNo time and space complexity analyses are given to show the efficiency of the proposed model." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Label Informativeness-based Minority Oversampling in Graphs" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024label,\ntitle={Label Informativeness-based Minority Oversampling in Graphs ({LIMO})},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xEDB5sSIK0},\nnote={under review}\n}" }, "abstract": { "value": "Class imbalance is a pervasive issue in many real-world datasets, particularly in graph-structured data, where certain classes are significantly underrepresented. This imbalance can severely impact the performance of Graph Neural Networks (GNNs), leading to biased learning or over-fitting. The existing oversampling techniques often overlook the intrinsic properties of graphs, such as Label Informativeness (LI), which measures the amount of information a neighbor's label provides about a node's label. To address this, we propose Label Informativeness-based Minority Oversampling (LIMO), a novel algorithm that strategically oversamples minority class nodes by augmenting edges to maximize LI. This technique generates a balanced, synthetic graph that enhances GNN performance without significantly increasing data volume. Our theoretical analysis shows that the effectiveness of GNNs is directly proportional to label informativeness, with mutual information as a mediator. Additionally, we provide insights into how variations in the number of inter-class edges influence the LI by analyzing its derivative. Experimental results on various homophilous and heterophilous benchmark datasets demonstrate the effectiveness of LIMO in improving the performance on node classification for different imbalance ratios, with particularly significant improvements observed in heterophilous graph datasets. Our code is available at \\url{https://anonymous.4open.science/r/limo-1A36/}" }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "class imbalance", "graph neural networks", "mutual information", "label informativeness" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review."
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/4d537bb414fef21c92bed3a168d411bcebb78fa6.pdf" }, "presentation": null, "primary_area": { "value": "learning on graphs and other geometries & topologies" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Label Informativeness-based Minority Oversampling in Graphs (LIMO)" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xETLME9sNq
SFW sampling for diffusion models via external conditioning
main
Active
diffusion;score-based;safeness;alignment;guidance
alignment, fairness, safety, privacy, and societal considerations
3;3;5;5
4;4;3;4
3;2;2;3
1;2;2;2
2;2;3;3
4
3.75
2.5
1.75
2.5
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- What is the computational overhead of the proposed method compared to standard sampling?\n- Have you explored using other vision-language models besides CLIP for external conditioning?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper presents a interesting approach to address the issue of NSFW content generation in SBMs by utilizing external conditioning signals.\n- The SFW sampler allows for user-defined NSFW classes, making it adaptable to different settings and applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a safe-for-work (SFW) sampling method for diffusion models to prevent the generation of not-safe-for-work (NSFW) content. The key innovation is using external multimodal models (specifically CLIP) as a source of conditioning to guide samples away from undesired regions in the ambient space." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- No ablation studies on different choices of external multimodal models besides CLIP.\n- The proposed method is a direct application of manifold-preserving sampler.\n- The quantitative results in experiments are not convincing. The proposed method does not show better performance." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "NA" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- **Complementing Direct Classification:** Could the authors discuss how their method complements or improves on using a direct classifier to detect NSFW content? Specifically, are there notable trade-offs in generation fidelity when comparing this approach to post-generation filtering?\n\n- **Stable Diffusion Configurations:** Stable Diffusion has various versions and parameter settings that might impact results. Could the authors clarify which versions (and parameter settings) were used in their experiments? A sensitivity analysis would improve replicability." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- **Significance:** The studied problem is very important in practice.\n\n- **Readability:** This paper avoids overly complex writing, which enhances comprehension for a broad audience.\n\n- **Minimal Impact on Image Quality:** The proposed method has a minor impact on the quality of benign samples, maintaining the aesthetic appeal of images that do not require correction." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a method to make diffusion models safer by reducing the likelihood of generating NSFW content. The authors leverage an external signal from CLIP to guide the generation process, steering it away from harmful or explicit images. With the introduction of a Conditional Trajectory Correction step, the model subtly aligns generated content to safe categories without compromising image quality.\n\nThis flexible approach allows users to define what qualifies as harmful content, with customizable categories suited to different contexts. Experiments on Stable Diffusion demonstrate that the method reduces NSFW content with minimal impact on image quality, while a prompt-alignment metric assesses faithfulness to the user’s intent. Overall, this method provides a practical step toward safer generative models aligned with user-defined standards." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **Moderate Impact:** The experimental results show a limited reduction in NSFW content. The evaluation metrics are somewhat narrow; incorporating widely adopted metrics like FID or Inception Score could provide a more comprehensive view of how the method impacts visual quality and generation fidelity.\n\n- **Computational Overhead:** The Conditional Trajectory Correction step likely increases computational and time costs due to added gradient estimations at each inference step. The authors do not discuss this in detail, and it would strengthen the paper to include a comparison of inference times with and without this step. Additionally, while the Conditional Diffusion Trajectory Correction adapts the manifold-preserving guidance from He et al., 2024, this adaptation may not represent a substantial advancement over existing methods.\n\n- **Parameter Sensitivity:** The model’s effectiveness depends on tuning parameters like the harmfulness threshold and guidance strength, which may vary by application. This sensitivity could hinder usability, as careful parameter adjustments might be needed for different scenarios. Including guidance or analysis on parameter selection would enhance practicality." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "In general, I'm willing to raise my score, if an area can be explored where the proposed method yields a significant performance improvement. Regarding the aforementioned weaknesses, I pose the following questions:\n\n1. $p_h$ is defined implicitly using trained classifiers. If the probability density $p_h$ of harmful samples is reliant on classifiers, what are the benefits of using your approach, rather than removing harmful descriptions from the prompt? \n\n2. Have you considered using a more elaborate set of concepts $\\mathcal{C}$ for harmful content detection? How would expanding $\\mathcal{C}$ impact the robustness and generalizability of the CTC approach?\n\n3. In addition to question 2: How does the cardinality $|\\mathcal{C}|$ affect the computational cost of the proposed CTC method and how does it affect performance? Can you provide an upper bound on computational cost or some time measurements for inference, especially for increasing $|\\mathcal{C}|$?\n\n4. The results indicate limited improvement over Erasing Stable Diffusion (ESD). Could you elaborate on any advantages of CTC in scenarios or datasets where it may outperform ESD? Were there specific cases in your experiments where CTC provided a distinct advantage?\n\n5. Given that the effectiveness of CTC is influenced by the threshold $\\eta$. The hyper-parameter selection of $\\eta=0.23$ was hand-selected based on a grid search. How sensitive is the performance of your approach to different values $\\eta \\in (0,0.23)$?\n\n6. Could you clarify any potential limitations in terms of scalability or generalization of the proposed CTC method? Specifically, are there specific concepts $\\mathcal{C}$ where the approach struggles, and what could be done to address these limitations?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The authors propose an interesting approach to generating SFW samples with SBMs, leveraging the solver's working principles for the reverse process. Herein lies the potential to extend the method to a broader range of applications beyond content moderation. By directly influencing the sampling trajectory based on explicit probability thresholds, this method could adapt to domain-specific needs where content control is essential, such as ensuring ethical content in SBM-driven artwork or meeting regulatory standards in automated media production. Furthermore, the Conditional Trajectory Correction (CTC) is theoretically grounded." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the challenge of minimizing the generation of undesired, harmful content in Score-Based Models. The distribution of such harmful samples is represented by a probability density $p_h$. \n\nThe authors aim to estimate the conditional probability $p_h(x_0 | x_t, t)$ to identify how likely a sample $x_t$ will produce harmful content at state $x_0$ in the generative process. 
They propose searching for candidate samples $x_{t-1}$ in the vicinity of $x_t$ that satisfy two criteria: i) they are valid samples, and ii) they yield low density $p_h(x_0 | x_t, t)$.\n\nTo implement this, the authors introduce a Conditional Trajectory Correction (CTC) mechanism. The proposed CTC evaluates the NSFW probability of each clean point prediction and applies a corrective adjustment if this probability exceeds a predetermined threshold $ \\eta > 0 $. This approach aims to effectively reduce the generation of harmful content while maintaining the integrity of valid sample generation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "One of the primary limitations of this paper lies in its reliance on an implicitly defined $p_h$ using trained classifiers. This approach introduces a dependence on the quality and generalizability of these classifiers, which by the authors' admission may not consistently or accurately capture all harmful content, especially if the classifiers have limited scope or biased training data.\n\nAdditionally, the method requires a predefined set of harmful concepts, $\\mathcal{C}$, to guide content filtering. In this study, the largest considered set was $\\mathcal{C}$ = {violence, nudity, NSFW, harmful}. However, this may not be comprehensive enough to capture the full range of potentially harmful content across various contexts, particularly in domains where nuanced or emerging types of harmful content need to be addressed. This limitation in scope could reduce the method’s effectiveness in broader applications.\n\nFinally, the reported results show little to no improvement over Erasing Stable Diffusion (ESD), a competing approach. This raises questions about the practical advantages of the proposed Conditional Trajectory Correction (CTC) method. Given its performance, it remains unclear if the added complexity of CTC justifies its use over established methods like ESD, especially in settings where computational efficiency and ease of implementation are critical considerations." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see above weaknesses" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is easy to follow and self-contained.\n- The experimental design and the proposed method are sound and make sense to me. \n- Sections 4 and 2.2 are quite useful in understanding the generalizability of the method.\n- Aesthetic degradation experiment is important for the work and I commend the authors for doing that." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies an important problem of safe-for-work sampling from diffusion models to avoid generating explicit content. 
To this end, they propose a novel approach to leverage CLIP-based embeddings of harmful concepts and guide the sampling of diffusion models away from this embedding space. They specifically use manifold-preserving guidance and only apply it when the classifier is confident enough (i.e., its probability is above a threshold). The proposed method is then compared against baselines and found to perform competitively in explicit content detection and in the quality and prompt-concordance of the generated images." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The empirical comparison with traditional guidance-based methods is missing. It has been discussed that this is different from classifier-based guidance, but no empirical evidence is provided to show that the proposed method is better. \n- A related technique of negative prompts is not compared or discussed:\n - https://minimaxir.com/2022/11/stable-diffusion-negative-prompt/\n- In almost all cases, and especially in Table 1, ESD outperforms the proposed method, and this negative result is not adequately discussed. This is strange since it seems that the proposed method is not working. \n- The only positive result is when the prompts are already safe (Table 3), which seems to indicate that the proposed method is beneficial due to its soft ways of conditioning and steering away from certain regions in an embedding space.\n- The novelty of the method is also limited, since it uses off-the-shelf classifiers and the manifold-preserving guidance approach along with a simple thresholding mechanism on the confidence. \n- In the conclusion, the authors note that explicit conditioning is important to uphold the bias of existing diffusion models. This is interesting, but due to the lack of any empirical evidence to support this claim, it is hard to establish that explicit conditioning is indeed the way forward. Thus, if this is a claim that the authors want to establish, some results would be really useful and relevant. \n- Other important related works are missing from experiments and/or discussion.\n - Hong, Seunghoo, Juhun Lee, and Simon S. Woo. \"All but One: Surgical Concept Erasing with Model Preservation in Text-to-Image Diffusion Models.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 19. 2024.\n - Lyu, Mengyao, et al. \"One-dimensional Adapter to Rule Them All: Concepts Diffusion Models and Erasing Applications.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\n - Pham, Minh, et al. \"Circumventing concept erasure methods for text-to-image generative models.\" The Twelfth International Conference on Learning Representations. 2023." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We avoid the generation of NSFW images in diffusion models by modifying the sampling process with an external conditioning source." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024sfw,\ntitle={{SFW} sampling for diffusion models via external conditioning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xETLME9sNq},\nnote={under review}\n}" }, "abstract": { "value": "Score-based generative models (SBM), also known as diffusion models, are the de facto state of the art for image synthesis.
Despite their unparalleled performance, SBMs have recently been in the spotlight for being tricked into creating not-safe-for-work (NSFW) content, such as violent images and non-consensual nudity. This article proposes a safe-for-work (SFW) sampler for SBMs implementing a Conditional Trajectory Correction step that guides the samples away from undesired regions in the ambient space using external multimodal models as the source of conditioning. Furthermore, using Contrastive Language Image Pre-training (CLIP), our method admits user-defined NSFW classes, which can vary in different settings. Our experiments on the text-to-image SBM Stable Diffusion validate that the proposed SFW sampler effectively reduces the generation of explicit content, as assessed via independent NSFW detectors. Furthermore, the proposed correction comes at a minor cost in image quality and has an almost null effect on samples that do not need correction. Our study confirms the suitability of the SFW sampler towards aligned SBM models." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "diffusion", "score-based", "safeness", "alignment", "guidance" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d2481dd5d5bb5ca364b74728905636868fbe5367.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "SFW sampling for diffusion models via external conditioning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
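
Aside on the mechanism the SFW-sampling reviews paraphrase: a Conditional Trajectory Correction is described as a threshold-gated step — predict the clean sample, score its harmfulness, and intervene only when the score exceeds $\eta$. A minimal sketch of that gating logic, assuming generic differentiable `denoiser` and `nsfw_prob` callables as stand-ins for the Stable Diffusion backbone and CLIP-based classifier; the plain gradient step below is a simplification and does not reproduce the paper's manifold-preserving guidance.

```python
import torch

def ctc_step(x_t, t, denoiser, nsfw_prob, eta=0.23, scale=1.0):
    """Threshold-gated correction: leave the trajectory untouched when the
    predicted clean sample looks benign, otherwise steer x_t away from the
    region the classifier flags as harmful."""
    x_t = x_t.detach().requires_grad_(True)
    x0_hat = denoiser(x_t, t)            # predicted clean sample
    p = nsfw_prob(x0_hat)                # differentiable harmfulness score
    if p.item() <= eta:
        return x_t.detach()              # benign: no correction applied
    (grad,) = torch.autograd.grad(p, x_t)
    return (x_t - scale * grad).detach()

# Toy stand-ins, only to show the call pattern.
toy_denoiser = lambda x, t: torch.tanh(x)
toy_nsfw = lambda x0: torch.sigmoid(x0.mean())
x_next = ctc_step(torch.randn(8), t=10, denoiser=toy_denoiser, nsfw_prob=toy_nsfw)
```

The gate is why benign samples are reported as nearly unaffected: below the threshold no gradient is computed at all, and the extra cost (one classifier pass per step, plus a backward pass only when triggered) is the overhead several reviewers ask to be measured.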
xEZiEhjTeq
Stagewise Development in Transformers and the Geometry of the Loss Landscape
main
Active
Science of deep learning;loss landscape geometry;training dynamics;singular learning theory
other topics in machine learning (i.e., none of the above)
5;5;5;6
4;3;2;2
2;2;3;4
2;3;2;3
3;2;2;3
5.25
2.75
2.75
2.5
2.5
-0.522233
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Can this method be extended to a larger range of tasks such as image processing? Can similar patterns also appear?\n\n2. Do you have theoretical reasons to believe your method would or would not scale to larger models?\n\n3. Could you provide metrics on the rate or extent of collapse across different model sizes or training regimes.\n\nFor other questions, please refer to weakness." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper introduces a novel method Local Learning Coefficient (LLC) to identify the stage boundaries\n2. The evaluation methods proposed in this article have been verified for two different types of tasks providing an enhanced understanding of transformer model training.\n3. Bring a new insight to the model stability and robustness by showing a phenomenon of “layer normalization collapse”." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates the emergence in stagewise development of internal structures in the training of transformer models, specifically for tasks like language modeling and in-context linear regression. The authors utilize the Local Learning Coefficient (LLC) to measure geometric degeneracy in the loss landscape and identify distinct training stages of the model’s behavior and internal structures. Beyond general loss decreases in model training, the authors discover several stages in which the geometry becomes more degenerate linking to a phenomenon named “layer normalization collapse”. These findings provide valuable insights into the complex processes of transformer training and underscore the importance of loss landscape geometry in understanding model development." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. This method has certain limitations in model selection. The language model only has 3 million parameters. Perhaps these methods cannot be directly generalized to large models such as GPT and BERT. Therefore, the universality of this model needs to be verified. \n\n2. The LLC method has certain limitations. Because it's estimation will be affected by the training parameters. When the parameters are not the local minimum of the loss, the estimation of LLC might have some bias. Though the article attempts to estimate through the SGLD method, since LLC is sensitive to hyperparameter selection, this instability will affect the credibility of experimental results.\n\n3. The author proposed the concept of \"layer normalization collapse\", but lacked some more in-depth discussions such as the causes and some quantitative analysis. These analyzes will add to the value of this study." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. In Figure 1, would the training process contain more stages (i.e. d(LCC)/dt = 0 points) if you lengthen the training? If there are more stages, what is the corresponding features (behavior and structure) and why do you think the first 5 stages are the most important?\n\n2. What is the mathematical basis for d(LCC)/dt = 0 points? Why are these points critical and able to become the boundary of development stages (from mathematical perspective)? Otherwise, is this a target-guided results (that is, we have discrete stages first and we dig the math features of the boundary)?\n\n3. I notice that this paper did not employ a learning rate scheduler throughout the training process. Though to some degree I acknowledge the setup for controlling variables, learning rate is a very important factor to determine the loss landscape. Many previous works point out that lower learning rate helps models to converge to a local minimum, and decayed learning rate can help models generalize better due to more flattened local minimum. What do you think about the impact of learning rate schedule on your work?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This work finds there are some discrete development stages for transformers via analyzing the geometry of loss landscape. It could bring insights for related works about mechenistic interpretability to transformers.\n\n2. I like the analyses in Stage validation section, which tries to connect LCC trend and some visible important features, though I also have some questions about this section.\n\nOn the whole it is an interesting work and I expect to have more discussion in rebuttal period. I am open to improve the score if my following concerns are well addressed." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper employs \"Local Learning Coefficient\" (LLC), a recently proposed metric, to measure the geometry of loss landscape during training transformers.\n\nBy conducting experiments on two-layer attention-only transformers on some simple tasks, the authors find that the training could be divided into several discrete development stages according the LLC features. And then different behavioral and structural features are found among different stages.\n\nThis work could bring meanings to reveal the learning mechanistic (and decrete development) for transformers." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. I am concerned about the theory adopted in this paper, LLC. It seems that the theory of LLC is not widely accepted in the research community and may subject to potential pitfalls and limitations. 
The lack of peer-reviewed studies on the LLC means that we have a limited understanding of its applicability, reliability, and overall validity.\n\n2. In Section 4.3, the authors state that first-layer previous-token heads start to form, and the evidence is Figure 2(b), top. However, I think this confuses cause and effect. After the authors discover that two specific heads start to have high previous-token scores in the LM3 stage, the previous-token heads, 1:2 and 1:5, are then indicated. Whereas, in the LM2 stage, there are already many heads with high scores; why aren't they the previous-token heads? Furthermore, LM3 seems less meaningful compared to the other stages.\n\n3. I think this paper omits some specific experimental implementation details, which need to be clarified.\n- The LLC is based on a measure of loss. How do you measure loss? Specifically, what dataset or distribution (and how many samples) do you use to measure loss? Would different datasets lead to totally different results (that is, different loss landscapes for different validation datasets)?\n\n[1] Lau, Edmund, Daniel Murfet, and Susan Wei. \"Quantifying degeneracy in singular models via the learning coefficient.\" arXiv preprint arXiv:2308.12108 (2023)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. What are the specific advantages of LLC over per-token loss that justify its adoption as a preferred metric for analyzing transformer training dynamics?\n\n2. The LLC computation relies on finding $w*$ (local minimum) through stochastic Langevin dynamics. How can LLC be reliably estimated with respect to weights $w_t$ during the actual training process?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Introduces a new metric (LLC) for analyzing transformer training dynamics and phase transitions, providing an alternative to traditional per-token loss measures\n\n- Presents comprehensive experimental analysis with thorough ablation studies\n\n- Provides detailed methodology and experimental setup documentation" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces the Local Learning Coefficient (LLC) as a novel approach to explain phase transitions in transformer training dynamics. LLC is calculated by measuring the expected difference between the posterior and local optimal solutions. The authors establish connections between LLC and the geometry of model likelihood, offering a new perspective on transformer learning behavior." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The relationship between loss changes and LLC lacks clear description.
While Table 1 presents changes in both ICL loss and LLC, it does not establish a definitive connection to phase transitions. In Figure 1, the dynamics of the ICL loss and the LLC do not match or describe the phase transitions. The paper would benefit from quantitative correlation metrics between ICL and LLC to strengthen the qualitative observations.\n\n- Despite extensive analysis, the paper primarily reinforces existing findings from Olsson et al. (2022) rather than presenting novel insights into transformer learning dynamics." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The same as above." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1.\tThe authors analyze the behavioral and structural changes of each stage to confirm the meaning of the LLC, and each stage exhibits valuable behavioral changes that reveal the learning process in language modeling.\n2.\tThe paper uses realistic data to train and analyze the language modeling of transformers.\n3.\tThe experiments are adequate and reasonable, and the paper is well written." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper leverages the LLC as a metric to establish developmental stages for transformer language modeling and in-context linear regression. These stages reveal the developmental changes of models in language modeling." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tThe transformers used are one- and two-layer attention-only models. It is unclear whether the same behavioral and structural changes can be seen in larger transformers; scaling may affect the loss landscape." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Transformers learn in discrete developmental stages that can be discovered by studying the local geometry of the loss landscape." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024stagewise,\ntitle={Stagewise Development in Transformers and the Geometry of the Loss Landscape},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xEZiEhjTeq},\nnote={under review}\n}" }, "abstract": { "value": "We show that internal structure emerges in discrete developmental stages during transformer training, for language modeling and in-context linear regression. We introduce a method for detecting boundaries between these stages by probing the degeneracy of the geometry of the population loss. The measure of degeneracy we use is the Local Learning Coefficient (LLC), which is derived from singular learning theory. We establish the validity of the stages revealed by the LLC using a range of behavioral and structural metrics.
In a majority of the stages the loss decreases and the geometry becomes less degenerate, which is consistent with an emerging literature on stagewise learning and saddle-to-saddle dynamics in neural network training. However, we also discover several stages in which the geometry becomes more degenerate, and in one example link this to a phenomenon that we call layer normalization collapse. These findings provide new insights into the intricate process of transformer training and underscore the importance of loss landscape geometry in understanding model development." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Science of deep learning", "loss landscape geometry", "training dynamics", "singular learning theory" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/0ddfa6de7e3c87a3d6733f23964d5d42a9302bf7.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/c1eabd465e291ee7968a9df2bd5b09f53b50e27a.zip" }, "title": { "value": "Stagewise Development in Transformers and the Geometry of the Loss Landscape" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
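
Aside on the estimator the stagewise-development reviews repeatedly ask about: the LLC is estimated with SGLD around a local minimum $w^*$. A minimal sketch of the localized estimator of Lau et al. (2023), cited in the reviews, checked on a toy loss whose learning coefficient is known analytically; the hyperparameters ($\gamma$, $\epsilon$, step counts) are illustrative choices, and, as one review notes, the estimate is sensitive to them.

```python
import numpy as np

rng = np.random.default_rng(0)

def llc_estimate(loss, grad, w_star, n, gamma=1.0, eps=1e-4,
                 n_steps=5000, burn_in=1000):
    """lambda_hat = n * beta * (E[L(w)] - L(w_star)), with the expectation
    over SGLD samples from a posterior tethered to w_star by a quadratic
    localization term (Lau et al., 2023)."""
    beta = 1.0 / np.log(n)               # WBIC-style inverse temperature
    w, losses = w_star.copy(), []
    for step in range(n_steps):
        drift = -0.5 * eps * (n * beta * grad(w) + gamma * (w - w_star))
        w = w + drift + np.sqrt(eps) * rng.standard_normal(w.shape)
        if step >= burn_in:
            losses.append(loss(w))
    return n * beta * (np.mean(losses) - loss(w_star))

# Sanity check: L(w) = ||w||^2 in d dimensions has learning coefficient d / 2.
d = 4
print(llc_estimate(lambda w: w @ w, lambda w: 2 * w,
                   w_star=np.zeros(d), n=1000))   # roughly 2.0
```

The localization strength $\gamma$ biases the estimate downward as it grows, which is one concrete way the hyperparameter sensitivity raised by the reviewers shows up even on this toy problem.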
xEivccxGEg
CMIMP: Effortlessly Achieving Diverse Population Training for Zero-Shot Coordination
main
Active
reinforcement Learning;zero-shot coordination;population-based training
reinforcement learning
3;3;3;5
3;3;4;4
1;2;2;2
2;2;2;3
1;2;2;2
3.5
3.5
1.75
2.25
1.75
0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1) Is the alternative objective $\\bar{I}$ also an unbiased estimator of $I$ for both possible forms of $F$? If not, what is the bias and how does it affect the optimization?\n2) Is Theorem 1 presented to prove the second property of $\\bar{I}$ (line 216)? If so, please make this connection more explicit in the revision.\n3) Is the source code for the experiments available?\n4) Is the framework robust to various choices of hyperparameters? The paper only reports the final hyperparameters in Appendix B, with no further details about the hyperparameter tuning phase. Which hyperparameter tuning method was used for the current choices in the implementation details? RL algorithms can be sensitive to the choice of hyperparameters, and the CMIMP framework consists of multiple modules such as LSTM, thus, it's important to discuss this in more detail." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1) The paper is addressing an important problem in multi-agent RL. \n2) The two core ideas behind the framework are interesting and valuable to study: sharing parameters across the agents, and regularization based on mutual information. \n3) The authors provide an ablation study to understand the effect of parameter $\\alpha$ on training. Additionally, the paper provides a comparison of different training modes. This highlights both the gains and limitations of the method. \n4) The authors use two different metrics for evaluating the zero-shot coordination of CMIMP and other benchmarks. They mention the weaknesses and strengths of each score and don't solely rely on one evaluation criterion. Overall, the experiment setup for the Hanabi environment is comprehensive." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies the problem of zero-shot coordination in RL and mentions population-based methods as a common approach to this problem. The authors address the gap that these methods suffer from inefficiency as the population size scales. The main contribution of the paper is a framework called CMIMP that consists of two main components: 1) meta-agent, and 2) mutual information regularizer. \nThe meta-agent has a hierarchical architecture that enables sharing parameters across the agents. The conditional mutual information is optimized as a regularizer to encourage diversity in the population. The paper proposes an alternative objective to the common unbiased estimator of the conditional mutual information and provides practical and theoretical properties of the alternative objective.\nThe framework is evaluated on the Hanabi environment and is compared to other population-based methods based on two different metrics for zero-shot coordination." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Methodology (Section 3)\n\nThe presentation of the problem setup in Section 3.1 requires significant revision for clarity and rigor. The meta-agent can be formally presented as a tuple containing all mentioned modules, with definitions and ranges for functions and variables. Variables $h$ and $c$ in equation 1 are used without prior definition. \nSome of the preliminary terms like the $Q$-function are well-known concepts but it's better to provide a definition using your own notation as they are frequently used (in Sections 3.2 and 3.3) in presenting the core methodology of the framework. \n\nThe paper requires a more thorough explanation of how the meta-agent's hierarchical architecture is reducing the number of trainable parameters. There is only a high-level statement (lines 171-173) in this regard which is not presented well to justify the central claims of the paper.\n\n------------------------\n\nExperiments (Section 4)\n\nAll the results presented in the paper are only based on the Hanabi environment. The current experiment setup is good, but the gain of performance in a single environment can always be misleading. \n\nThe authors mention that the CMIMP framework is compatible with both value-based methods and policy gradient-based methods and mention two possible forms for the function $F$ in Section 3.2. The choice of value-based methods for the Hanabi environment is backed by prior work, but there are no experiments in the main section or appendix for policy-gradient-based methods. The authors should either add experiments for policy-gradient methods such as PPO on a different environment or narrow the framework's scope to focus exclusively on value-based methods." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Why was mutual information chosen as the diversity metric? How does it compare to other diversity metrics or entropy-based measures in the context of population-based training?\n2. Does CMIMP’s architecture differ from a multi-headed architecture with a mutual information objective? If so, what are the key differences?\n3. Why does training time and memory consumption not significantly increase with population size in Figure 3? If the increases are minor, providing actual numbers in the appendix would add clarity.\n4. What specific set of algorithms or agents was used in calculating the 1ZSC-XP score?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- This paper proposes an architecture that can improve the efficiency of population-based training in ZSC tasks. 
Furthermore, the authors propose using a mutual information objective to encourage diverse populations of agents to be learned by their method. The combination of these two components in ZSC appears to be novel. \n- Improving the efficiency of population-based training in ZSC is an important problem in MARL." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose Conditional Mutual Information Maximized Population (CMIMP), a method that uses a meta-learned agent (meta-agent) to improve the efficiency of population-based training in zero-shot coordination (ZSC) tasks. Furthermore, CMIMP uses a mutual information objective to encourage diverse populations of agents to be learned by the meta-agent. The authors compare CMIMP to ZSC baselines and other population-based training methods in Hanabi." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The authors suggest similarities between population-based training and meta-RL, yet the connection is vague. While meta-RL focuses on fast adaptation to new tasks, CMIMP uses a multi-headed architecture that is more aligned with shared parameter frameworks than true meta-RL. A more explicit clarification on how CMIMP draws on meta-RL principles, or how it might benefit from these principles, would strengthen the narrative.\n- The experimental section lacks crucial details, including the evaluation protocol, the number of random seeds, and the number of players in Hanabi (especially in Table 3). Additionally, it is unclear which algorithms were used for the 1ZSC-XP score. These omissions make it difficult to assess the results. \n- The experiments are only conducted on a single environment. The authors should consider evaluating CMIMP on a broader set of environments, such as Overcooked. \n- The paper does not discuss CMIMP’s sample efficiency. Evaluating how CMIMP compares to other methods in terms of sample efficiency (not just final convergence performance) would be valuable for understanding its practical viability.\n- Although E3T [1], a modern method in ZSC, is mentioned in the related work, it is not included in the experiments. A comparison with E3T might be relevant to show the effectiveness of CMIMP.\n- The speed and resource consumption results in Figure 3 are surprising. CMIMP is expected to be slower and require more memory with an increase in population size, because of the additional mutual information maximization objective, which requires all observations to be passed through all agent heads. \n- While using mutual information as a diversity metric appears novel in ZSC, it is commonly applied in diversity maximization in MARL (e.g., [2]). Citing these related works would better position CMIMP within the existing literature.\n\n[1] Yan, X., Guo, J., Lou, X., Wang, J., Zhang, H. and Du, Y., 2024. An efficient end-to-end training approach for zero-shot human-AI coordination. Advances in Neural Information Processing Systems, 36. \n\n[2] Li, C., Wang, T., Wu, C., Zhao, Q., Yang, J. and Zhang, C., 2021. Celebrating diversity in shared multi-agent reinforcement learning. Advances in Neural Information Processing Systems, 34, pp.3991-4002."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- It is not clear to me why maximizing mutual information necessarily corresponds to a diverse training population. The paper states directly that a diverse population corresponds to \"making the meta-agent output distinct actions with different sub-decision modules given an observation and history trajectory\" which is equivalent to maximizing mutual information $I(A;U|H)$ with $A$ represents the output action, $U$ represents the index of sub-decision module, $H$ represents the observation. However, I don't see exactly how this objective necessarily maps to diverse population. Are there any connections of this objective of high mutual information to e.g. more differentiable action distributions?\n- Can you given some explanation to the empirical results on page 9 which concludes that adding $\\overline{I}(A;U|H)$ to the training objective at a moderate level improves ZSC, but assigning it an overly large weight have negative effect? Given it seems that the goal is to maximize diversity of training population to improve ZSC. Or is this observation due to the specific agents used during testing?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Prior work achieves population diversity by requiring agents to output differentiable action distributions, this work uses meta-learning that maximizes conditional mutual information between actions and sub-decision modules conditioned on observation. This explores the connection between meta-learning with the objective of maximizing mutual information and population training. \n- Through constructing a hierarchical structure, the meta-agent trains more efficiently by reducing parameters that need to be optimized while maintaining population diversity (as evaluated by high mutual information): intuitively, task-related parameters (e.g. modules that process observation and keep history memory) can be shared and behavior-related parameters (e.g. decision-making modules) can be individually optimized." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies efficient training paradigms for zero-shot-coordination (ZSC), a framework to improve the generalization performance of cooperative agents by requiring agents to coordinate with unknown agents that were not seen during training. It focuses on the use of population-based training to improve ZSC by coordinating with a population filled with diverse agents with different behaviors. Specifically, it explores the use of meta-learning for population frameworks that reduce computational costs while maintaining population diversity. It proposes a population-based zero-shot coordination framework called CMIMP. 
It uses meta-learning with a hierarchical architecture to realize a diverse population for ZSC training by maximizing the conditional mutual information between actions and sub-decision modules, conditioned on the observation. It empirically validates the performance of CMIMP in the Hanabi environment, and conducts ablation studies on how different training modes affect the performance of population training." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper lacks theoretical justification for the update method used for optimizing mutual information. In the main theoretical result, Theorem 1, the authors show that the specific update rule in equation (6) will increase $I_j$ for some component $j$, but it does not show that the method will converge to the optimal mutual information when the update stops." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- Can the authors elaborate on why Eq. 5 is used to approximate the mutual information? How does taking the average lead to the approximation of the mutual information?\n- How did the authors come up with Table 1?\n- Can this method be scaled to >2 players? Would that require any changes in the architecture?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- The proposed \"meta-policy\" architecture is well motivated\n- The proposed architecture allows compute-efficient training, while achieving state-of-the-art performance in 2-player Hanabi ZSC" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work proposes a novel population-based training method that reduces the training cost by learning a shared \"meta-policy\" instead of separate individual policies. The meta-policy is a shared network with different decision heads corresponding to different agents in the population. Furthermore, it proposes a novel training objective for regularizing diversity between agents (heads) via a mutual-information-based objective. The combination of these two proposals outperforms state-of-the-art methods in 2-player Hanabi." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- I find the paper to be generally hard to follow, especially the math equations. Many variables are not defined, e.g., $N$ at line 172, $v$ in Theorem 1. Some variables are used confusingly, e.g. $u_j$ and $u_i$ in Eq.3, $N$ in Table 2. From the text, $i$ and $j$ have totally different meanings (agent index vs. number of transitions). $i$ and $j$ are being compared in Eq. 5 despite having different meanings.\n- Table 1 is not fact-based nor scientific but a subjective judgement from the authors.
I suggest the authors either remove this information or justify it with facts and citations. \n- Eq. 5 needs a lot more motivation and introduction. It is not clear why we should use this specific form to approximate the mutual information.\n- Regarding the approximation in Eq. 5, it is not clear to me at all why taking the mean of action probabilities or q-values gives an approximation of the mutual information.\n- The instantiation in Eq. 8 gives me some understanding of the method, rather than the initial derivation, which should not be the case. Still, I find Eq. 8 to be cryptic, partly because $i$ and $j$ are being compared despite their difference. My rough understanding is that the objective drives different agents to have different q-values given the same observation. Being different in this case comes from the fact that the objective tries to reduce the q-values of all other agents, which is also my guess for the notation $i \neq j$, and which apparently seems to work well in Hanabi.\n- I do not find the proposed architecture related to Meta-RL.\n- The proposed training objective is supposed to \"guarantee\" diversity, but Eq. 8 optimizes for different Q-values, which does not guarantee different behaviors.\n- The paper does not ablate the inclusion of the proposed training objective. With the given architecture, I imagine that using TrajeDiv or MEP diversity constraints should also work well, but this is not shown in the paper.\n- The paper would benefit from also running the same experiment as in Table 5 but with $\alpha=0$ to show the effectiveness of the proposed method across training setups." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024cmimp,\ntitle={{CMIMP}: Effortlessly Achieving Diverse Population Training for Zero-Shot Coordination},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xEivccxGEg},\nnote={under review}\n}" }, "abstract": { "value": "Zero-shot coordination has recently become a hot topic in reinforcement learning research. It focuses on the generalization ability of agents, requiring them to coordinate well, without any fine-tuning, with collaborators that have not been seen before. Population-based training has been proven to provide good zero-shot coordination performance; nevertheless, existing algorithms exhibit inefficiency, as the training cost scales linearly with the population size. To address this issue, this paper proposes the Conditional Mutual Information Maximized Population (CMIMP), an efficient training framework comprising two key components: a meta-agent that efficiently realizes a population by selectively sharing parameters across agents, and a mutual information regularizer that guarantees population diversity. To empirically validate the effectiveness of CMIMP, this paper evaluates it along with representative frameworks in Hanabi and confirms its superiority." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "reinforcement Learning", "zero-shot coordination", "population-based training" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/9ef7e5263537c09a106836f96aa778b20ae827ea.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "CMIMP: Effortlessly Achieving Diverse Population Training for Zero-Shot Coordination" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xF5st2HtYP
Adaptive Strategy Evolution for Generating Tailored Jailbreak Prompts against Black-Box Safety-Aligned LLMs
main
Active
Strategy evolution;Black-box jailbreak;Safety-aligned LLM
alignment, fairness, safety, privacy, and societal considerations
3;3;5;6
5;4;4;4
1;1;2;3
1;2;2;3
2;2;2;3
4.25
4.25
1.75
2
2.25
-0.555556
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- The authors propose breaking down jailbreak strategies into Role, Content Support, Context, and Communication Skills, claiming these align with persuasive human behaviors. How can we be sure these components genuinely improve jailbreak efficacy, given the lack of empirical justification? Could there be other, more effective component structures, or are these simply intuitive choices without systematic validation?\n\n- If the initial population is not well-chosen, could ASE converge to suboptimal solutions, reducing its effectiveness?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1.\tAdapts insights from evolutionary algorithms to LLM jailbreaking, breaking down the pipeline into four distinct components.\n\n2.\tPositive experimental results: Experimental outcomes show that ASE achieves higher JSRs with fewer queries compared to existing methods across several models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "They present a novel approach to jailbreaking LLMs in black-box settings by leveraging evolutionary algorithms. The ASE framework divides jailbreak strategies into four modular components—Role, Content Support, Context, and Communication Skills—expanding the possible strategy combinations and enabling dynamic adaptation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I appreciate the authors’ efforts but would like to point out the following:\n\n1.\tThe design in **Component-Level Strategy Space** seems artificial and arbitrary. The authors argue that “these components perfectly align with how humans are more easily persuaded,” which is a strong statement lacking evidence. In designing red-teaming algorithms for LLMs, significantly more investigation is needed to justify such a design beyond human-like analogies and heuristics, which do not align with academic rigor.\n\n2.\tRegarding the **Fitness Evaluation**, the design also relies on heuristics that make the algorithm feel less reliable, particularly with features like “keyword matching.” What advantages does this offer compared to a reward model that labels the answer’s harmfulness?\n\n3.\tIn terms of the **experiments**, I believe the focus is misplaced. The authors make strong claims about “expanding the search space and allowing for precise, efficient strategy refinement,” yet I could not find any specific evidence to support this. The paper's algorithm, compared with other works, centers on adaptive evolution, but only JSR is shown. What about the structure and behavior variations within the population? 
I could not find compelling evidence to justify the complex design of such an evolutionary algorithm, which seems to introduce a high barrier to practical use and deployment.\n\n4. **Scalability and computational cost** are a concern, given the computationally intensive nature of population-based algorithms." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. See weaknesses. How does the computational cost of ASE compare to existing jailbreak methods, particularly in large-scale testing scenarios? If you have capacity during the rebuttal, please compare ASE's scalability to that of baseline methods as the dataset size increases.\n2. I wonder whether you have tried to propose a defense framework against ASE. How might existing defense mechanisms perform against it?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper is well-written and easy to understand.\n2. As a black-box jailbreak method, the ASE framework performs well.\n3. The method is simple and effective." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the limitations of current jailbreak methods for safety-aligned Large Language Models (LLMs), particularly in black-box scenarios where models are resistant to prompt manipulation for harmful outputs. The authors argue that existing methods often either depend too heavily on red-teaming LLMs (pushing their reasoning capacities) or rely on rigid, manually predefined strategies, which limits their adaptability and effectiveness. To address this, they introduce the Adaptive Strategy Evolution (ASE) framework, which decomposes jailbreak strategies into modular components and optimizes these through a genetic algorithm. This approach provides a structured, deterministic path to refining jailbreak methods, along with a new scoring system for feedback, resulting in higher jailbreak success rates (JSR) compared to existing approaches." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. I am concerned about the complexity and resource requirements. ASE’s reliance on genetic algorithms and modular strategy decomposition could be computationally intensive, which might limit its practical applicability for smaller research teams or resource-constrained environments. Could you provide the runtime, memory usage, or hardware specifications used for the experiments? Additionally, a comparison of these requirements to those of existing methods would be helpful.\n2. No other obvious weakness." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see weakness." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- The decomposition of jailbreak strategies into different independent modules appears novel.\n- The evaluation is thorough." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a new jailbreak method for large language models, namely ASE. It involves three key steps. First, it decomposes jailbreak strategies into four modules, each of which can be implemented differently. This process constitutes a large search space. ASE then utilizes a genetic algorithm to perform optimization in this space, with a newly designed fitness function. The authors demonstrated the effectiveness of ASE on two tasks, in which it exhibited superior performance compared to previous methods when attacking closed-source models like GPT-4." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- After reading the manuscript, I find this work to resemble more of an engineering design. For example, the authors did not introduce in detail how each component in the search space can be implemented. While there is a table in the appendix describing this, I wonder what protocols or criteria were employed to design this space. Without this information, it is hard to be convinced that rigorous efforts were involved in this design; it seems more like a demo that could be generated by simply prompting an LLM. Also, while the authors regarded their fitness function as a contribution, the four-point scoring with an additional term is a basic engineering trick—all that’s needed is to replace the evaluation prompt for the LLM with a more fine-grained one, which could itself be generated by an LLM. Finally, the core of the proposed method is a basic GA, which may be too straightforward for a prestigious conference like ICLR. More importantly, the authors did not address questions like why they specifically chose GA over other large-scale optimization methods in the literature. Additionally, could the original GA be directly employed for this task, or is this the optimal design choice? There are many similar questions, but unfortunately, the authors did not discuss these matters in their manuscript.\n\n- Building on the previous point, a major concern with using GA in jailbreaking is the number of large number queries it would require. Unlike traditional adversarial attacks, querying LLMs can be very expensive, so the total query budget should be an important consideration in jailbreaking. Although the authors discussed this as a hyperparameter, it is unclear how ASE would perform if given the same budget as previous approaches. 
Currently, the authors simply treat this aspect as a limitation, but this could actually be a critical consideration for practitioners, one that can directly influence their choice of methods. This concern is further exacerbated by the fact that the proposed method does not perform significantly better than existing approaches on open-source models, which would be cheaper for large-scale querying.\n\n- The manuscript is not well-written. For example, there are numerous grammar issues and inappropriate expressions, which strongly affect the readability of the introduction section (for example, many simple processes and concepts introduced in the methods section are made very ambiguous and overly complex, requiring the reader to see the methods section to understand the motivations in the introduction). This can be disappointing, as the introduction is supposed to provide a clear and intuitive picture of the whole manuscript from the outset.\n\nIn addition, I list some of the presentation issues.\n- Line 074: The reference to Wei et al., 2024 is generated with improper commands, leading to awkward additional brackets. The authors should consult ICLR’s official guide for LaTeX templates.\n- Line 078: “Here, red-teaming LLMs act like drafting from structured outlines….” There are grammar issues with this sentence, and I could only gauge its meaning after reading it several times.\n- Line 085: The “considering” part has similar issues.\n- Line 137: The comma after “TAP” is redundant." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. As mentioned, “beyond a population size of 15 and iteration count of 5, the gains in JSR become marginal.” This is indeed a small setting; therefore, I am curious whether the genetic algorithm is overqualified here. Maybe some simpler optimization technique would suffice, such as local search?\n2. How large is the search space of strategies optimized by the genetic algorithm? If the search space is small, then there is no need to use complex optimization techniques." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. ASE breaks down jailbreak strategies into modular components, significantly increasing both the flexibility and scope of the strategy space.\n2. This work presents an enhanced fitness evaluation system that overcomes the drawbacks of traditional binary or overly complex scoring methods, providing accurate and reliable assessment of jailbreak prompts." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work tackles jailbreak attacks on safety-aligned LLMs, which remain vulnerable to prompt manipulation. 
The proposed Adaptive Strategy Evolution (ASE) framework improves jailbreak techniques by modularizing strategies and using a genetic algorithm for systematic optimization, replacing uncertain LLM-based adjustments. ASE also introduces precise feedback, enabling targeted refinements. Results show ASE outperforms existing methods against advanced models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Some key parameters are missing. For example, what were the crossover and mutation rates in the experiments?\n2. As evolutionary algorithms are inherently random, the method should be tested multiple times and the statistical results (average and standard deviation) reported. However, I did not find any such results, the absence of which lowers the soundness of this work.\n3. The paper calls the method “adaptive”, but I did not find any “adaptive” design in the sense of the strategy updating adaptively as the optimization proceeds. It seems to be just a standard optimization framework. Can you elaborate on how the ASE framework adapts during the optimization process? Are there any specific mechanisms that adjust the strategy or algorithm parameters based on intermediate results?\n4. In line 323, “algorithm 1” should be “Algorithm 1”." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a novel black-box jailbreak method, which has successfully jailbroken Llama3, GPT-4o, Claude-3.5, and even o1." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024adaptive,\ntitle={Adaptive Strategy Evolution for Generating Tailored Jailbreak Prompts against Black-Box Safety-Aligned {LLM}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xF5st2HtYP},\nnote={under review}\n}" }, "abstract": { "value": "While safety-aligned Large Language Models (LLMs) have been secured by extensive alignment with human feedback, they remain vulnerable to jailbreak attacks that exploit prompt manipulation to generate harmful outputs. Investigating these jailbreak methods, particularly in black-box scenarios, allows us to explore the inherent limitations of such LLMs and provides insights into possible improvements. However, existing black-box jailbreak methods either overly rely on red-teaming LLMs to execute sophisticated reasoning tasks, such as diagnosing failure cases, determining improvement directions, and rewriting prompts, which pushes them beyond their inherent capabilities and introduces uncertainty and inefficiency into the refinement process, or they are confined to rigid, manually predefined strategy spaces, limiting their performance ceiling. To enable a sustained and deterministic exploration with clear directional guidance, we propose the novel Adaptive Strategy Evolution (ASE) framework. Specifically, ASE innovatively decomposes jailbreak strategies into modular key components, dramatically enhancing both the flexibility and expansiveness of the strategy space. This also allows us to shift focus from directly optimizing prompts to optimizing the jailbreak strategies. Then, by leveraging a genetic algorithm (GA) for strategy components' selection and mutation, ASE could replace the uncertainties of LLM-based self-adjustment with a more systematic and deterministic optimization process. 
Additionally, we have designed a new fitness evaluation that emphasizes the independence of scoring criteria and provides highly accurate and reliable feedback, enabling precise and targeted refinement of jailbreak strategies. Experimental results further demonstrate that ASE achieves superior jailbreak success rates (JSR) compared to existing state-of-the-art methods, especially against the most advanced safety-aligned LLMs like GPT-4o, Claude-3.5, and even o1." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Strategy evolution", "Black-box jailbreak", "Safety-aligned LLM" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/69802c93d6084c08046ec67a9ec257ba226ed8fb.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/2a520199e161ff65fc95e0bd9dc8b85396b959bf.pdf" }, "title": { "value": "Adaptive Strategy Evolution for Generating Tailored Jailbreak Prompts against Black-Box Safety-Aligned LLMs" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
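The ASE reviews above characterize the optimizer as a basic genetic algorithm over modular strategy components (Role, Content Support, Context, Communication Skills) and ask about crossover and mutation rates and the query budget. Below is a minimal sketch of such a loop, assuming hypothetical component pools, an opaque `fitness` callback standing in for the paper's scoring system, and illustrative rates; the population size of 15 and 5 iterations echo the numbers quoted in the last review, but nothing here is the paper's actual implementation.

```python
import random

# Hypothetical component pools; the paper's actual options differ.
POOLS = {
    "role": ["researcher", "novelist", "historian"],
    "content_support": ["case_study", "statistics", "fictional_setting"],
    "context": ["academic", "emergency", "roleplay"],
    "communication": ["appeal_to_authority", "foot_in_the_door"],
}

def random_strategy():
    return {k: random.choice(v) for k, v in POOLS.items()}

def crossover(a, b):
    # Uniform crossover: each component is inherited from either parent.
    return {k: random.choice([a[k], b[k]]) for k in POOLS}

def mutate(s, rate=0.2):
    # Resample each component independently with probability `rate`.
    return {k: random.choice(POOLS[k]) if random.random() < rate else v
            for k, v in s.items()}

def evolve(fitness, pop_size=15, iters=5):
    """fitness: strategy dict -> score; every call costs target-model
    queries, so fitness should be memoized in practice."""
    pop = [random_strategy() for _ in range(pop_size)]
    for _ in range(iters):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 3]
        pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)
```

Even this toy version makes the cost structure visible: each generation spends roughly pop_size fitness evaluations, i.e. target-model queries, so the total budget scales with pop_size times iters, which is precisely the query-budget concern raised in the third review.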
xFezgECSLa
On the Design and Analysis of LLM-Based Algorithms
main
Active
Large Language Models;Compound AI Systems;Algorithm Design and Analysis;Analytical Framework
other topics in machine learning (i.e., none of the above)
3;3;3;3
3;4;2;4
3;3;2;2
1;2;1;2
2;2;3;2
3
3.25
2.5
1.5
2.25
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please clarify if all of the defined costs are just qualitative. \n\nIf they are not, how does one define a metric to assign values and to update the values?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The stated objectives of the paper are excellent if they can be achieved." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper is about a formal investigation into the design and analysis of LLM-based algorithms, i.e. algorithms that contain one or multiple calls of large language models (LLMs) as sub-routines and critically rely on the capabilities of LLMs" }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper does not seem to deliver very much of the stated contributions. Proposition 1 is the key result, and it is quite weak.\n\nThe real issue is that a computation graph is a precise mathematical object that uniquely defines a computation, whereas this LLM-based computation graph is neither precise nor does it uniquely define a computation. The lack of uniqueness stems from the non-determinism of any LLM-based call.\nAs a consequence, how can one compare a \"normal\" computation graph with this LLM-based computation graph?\nFurther, the analysis should be stochastic (with expected complexity), rather than the proposed deterministic complexity. An LLM is inherently stochastic, so one cannot specify the outcome of an LLM call as a deterministic object.\n\nThe paper states: use “accuracy” to refer to the broader concept of “quality”, and an “error metric” can be any metric that measures how much the output of an algorithm deviates from certain criteria.\"\n\nWhere do the costs come from in C(prefilling), C(decoding)?\n\nIt seems like all of these costs are just qualitative.\n\nhypothetically categorize LLMs into two types: Type-1 LLMs are only prone to the first failure mode, while Type-2 LLMs are prone to both\n\nToo much speculation in this article\n\nIt seems like the article builds up to Proposition 1,and then we ask \"so what!\". This is a pretty weak conclusion, and it does not even look like to has much strength as a formal mathematical expression." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "What part of your analysis is applicable exclusively to LLMs rather than to any parallel algorithm using external resource?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper advocated analysis of LLM-based algorithms." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper attempts to formalize cost and accuracy analysis of LLM-based algorithms ." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The theoretical part proposes a framework for analysis which looks like a simplistic variant of analysis of parallel algorithms, something one learns during undergrad CS studies. The 'empirical' evaluation in the body of the paper is just a qualitative description of application of the proposed (rather standard) methodology to a few problems. There is 'numerical evaluation' in the appendix, which is not convincing and lacks detail.\n\nWhile I welcome the idea of systematic analysis of algorithms, including LLM-based ones, the paper lacks both theoretical novelty and empirical justification. Significant effort has to be spent to bring this paper to the level of a publication at a major conference such as ICLR." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper tackles an interesting and important problem." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a framework to formally investigate LLM-based algorithms,\nallowing to assess accuracy and efficiency. The authors describe their\nframework and instantiate a few different LLM-based algorithms, which they then\nanalyze." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "First, the paper is far too long for a conference format (the appendix is twice\nas long as the main paper and includes sections that should be part of the main\npaper, like related work). This paper would be more suitable as a journal paper.\n\nWith regards to the proposed evaluation framework, very little of it seems to be\nspecific to LLMs, or rather the LLM-specific parts are provided by the\ninvestigator. 
It seems that this would be difficult in practice -- how would I characterize the capabilities of any given LLM in a way that allows one to determine what the output would be for a given prompt?\n\nThe insights the analyses in the paper provide are very generic: \"the optimal value of m that minimizes costs might depend on the choices of cost metrics and assumptions of LLM inference service, among other factors\", \"the minimum error of the overall algorithm might be obtained by some intermediate value of m that achieves a balance between these two failure modes\", \"each option of retrieval has its own pros and cons\". None of these are actionable, and it is unclear that the proposed framework is necessary to obtain them. It is unclear whether other insights are generally true, in particular \"since a smaller value of m makes each sub-task easier, it is reasonable to expect that the overall error E(y) with this metric will also become smaller as m decreases\" -- while the individual errors might decrease, combining multiple steps potentially compounds individual errors, resulting in an overall increase in error.\n\nOther parts of the proposed framework are unclear. Section 3.3 describes the answer being generated by majority voting, but the correct answer appears only once. Dividing the input text into chunks, it seems that there would be a lot of incorrect and \"don't know\" answers and, hopefully, a single correct one (for the chunk that did contain the answer). How can majority voting possibly return the correct result in this case?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "I have no concerns." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please list the non-trivial takeaways stemming from your analysis. How does it influence the design of LLM-guided algorithms? What does it explain in those we already have? How does it contribute to the field? Please be precise." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "I like the proposed idea of analyzing LLM-based algorithms as computational graphs. It's definitely the most natural way, similar to analyzing standard algorithms and data flow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The aim of the paper is to analyze the behavior of LLM-based algorithms, specifically their theoretical \"accuracy\" and cost. The authors present LLM-based algorithms as computational graphs consisting of LLM queries and non-LLM computations that capture the data flow and allow for analysis via dynamic aggregation. The proposed framework is described in Section 2. Then, Sections 3.0-3.1 provide an abstract analysis of a map-reduce pattern, which is followed by special examples of counting and retrieval. 
In Section 4, a hierarchical decomposition pattern is analyzed using the example of determining the value of a variable. Finally, Section 5 analyses an example composed of recursive queries." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main problem of this paper is that, in my opinion, there is no takeaway from it. The proposed analysis mostly ends with framing the selected patterns as computational graphs and showing a basic bound, sometimes followed by very generic statements like \"so there is a tradeoff, period\". I do think that we should have a principled way of analyzing LLM-based algorithms, and the proposed framework looks promising, but little was done to use it. To be more precise:\n\n- In Section 3 you analyze the map-reduce pattern, and in my opinion the most interesting things happen here.\n - You derive (or rather state, since it is straightforward) the bound on the cost (Equation 4). Then, you use it to show (actually state) that it is minimized at $m=\min(n, \bar m)$. Yes, I agree that if the cost is linear or sub-linear with respect to the input, the best idea is to use as large chunks as possible (again, rather a straightforward statement).\n - Then, you derive a bound on the cost with quadratic complexity and find a minimizer for that, which is approximately $L_{sys}$. This is actually interesting and surely non-trivial, but sadly no further discussion is provided.\n - Then, you analyze the parallel setting. Although the analysis is sound, I was a bit disappointed when I understood the key takeaway: \"when m is very big, the cost increases if we make it even bigger; when m is very small, the cost increases if we make it even smaller\". Unfortunately, I find it straightforward -- there is always a tradeoff in the size of distributed computation parts, and using parts that are too small or too big is never a good idea.\n - Then, you state that the cost is minimized by $m\asymp n/p$. That would be interesting, but I cannot find any justification for that statement.\n - Also, the asymptotic notation you use is a bit hard to parse. The only variable that has unbounded support is n, which should make other variables either constants or implicit functions of n, which is not specified directly. It looks like you take asymptotics in m, although it's bounded by n.\n - The \"implication\" in line 323 literally states that \"The optimal value of m might depend on various factors, period\".\n - Section 3.2 applies the derived formulas to a counting example. However, (1) the cost analysis is little beyond simply rewriting the formulas, and (2) the takeaway is again straightforward (the overall counting error is smaller if chunks are smaller).\n - Section 3.3 describes the needle-in-a-haystack example clearly. But then it equally clearly states the conclusion \"the optimal value of m is a tradeoff, it can't be too big or too small, period\". Please explain to me why any kind of analysis is needed to make such a claim.\n\n- In Section 4 you describe the hierarchical decomposition pattern. Now, there are two listed conclusions: (1) reasoning has much lower cost than retrieval, and (2) making the algorithm sequential or parallel has pros and cons. Although I agree with both, at the same time I don't see how your framework helped you derive them. They were not conclusions; you stated both during the analysis, and that's fine, since both need no explanation. 
But then, again, you haven’t shown the benefits of using your framework.\n\n- In Section 5 you analyze recursive decomposition and state a bound on the error (and that’s literally all). The question is: why do we need that bound? Is it helpful in any way? Why is there no discussion?\n\n- As detailed, although I like the high-level idea of the framework, the paper shows no significant usage of it. Having said that, I’d be happy to be proven wrong, so I’m open to discussion.\n\nOther comments:\n\n- I suggest adding a brief example of an LLM-based algorithm at the beginning. Initially I thought more about algorithms like Dijkstra, so clarifying that with a simple one-sentence example would be helpful.\n- It takes 4 pages until you start any analysis. I suggest making the initial descriptions (in fact, everything) much more concise, since the framework you propose is rather simple (which is a benefit), but then reading 4 pages of \"how to decompose an algorithm into a computation graph\" sounds much too lengthy.\n- It's a bit confusing that you name the paragraphs the same way in sections 2.3 and 3.1. If you repeat the same names in Section 2 (which introduces the framework), then in 3.1 it sounds like an unintended repetition at first glance.\n- Figure 13 is actually very helpful in understanding the description. Please move it to the main text, even one of them, even smaller.\n- The paper is missing a discussion of limitations and related works." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024on,\ntitle={On the Design and Analysis of {LLM}-Based Algorithms},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xFezgECSLa},\nnote={under review}\n}" }, "abstract": { "value": "We initiate a formal investigation into the design and analysis of LLM-based algorithms, i.e. algorithms that contain one or multiple calls of large language models (LLMs) as sub-routines and critically rely on the capabilities of LLMs. While LLM-based algorithms, ranging from combinations of basic LLM calls to complicated LLM-powered agent systems and compound AI systems, have achieved remarkable empirical success, their design and optimization have oftentimes relied on heuristics and trial and error, which is largely due to a lack of formal and analytical study for these algorithms. To fill this gap, we start by identifying the computational-graph representation, \ntask decomposition as the design principle, and some key abstractions, which then facilitate our formal analysis for the accuracy and efficiency of LLM-based algorithms, despite the black-box nature of LLMs. Through extensive analytical and empirical investigation in a series of case studies, we demonstrate that the proposed framework is broadly applicable to a wide range of scenarios and diverse patterns of LLM-based algorithms, such as parallel, hierarchical and recursive task decomposition. Our proposed framework holds promise for advancing LLM-based algorithms, by revealing the reasons behind curious empirical phenomena, guiding the choices of hyperparameters, predicting the empirical performance of algorithms, and inspiring new algorithm design. To promote further study, we include our source code in the supplementary materials." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Large Language Models", "Compound AI Systems", "Algorithm Design and Analysis", "Analytical Framework" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/8b12ee14d2ebf90d3d8693d4494fbbc1d90d167e.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/b34c8de1fee79248597059a446a53b7a68f1063d.zip" }, "title": { "value": "On the Design and Analysis of LLM-Based Algorithms" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xFvHcgj1fO
OML-AD: Online Machine Learning for Anomaly Detection in Time Series Data
main
Active
Online Machine Learning;Anomaly Detection;Time Series;Concept Drift
learning on time series and dynamical systems
3;3;3;3
4;4;4;4
1;1;1;2
2;1;1;1
1;2;2;2
3
4
1.25
1.25
1.75
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "W1. Lack of technical depth\n\nThe paper presents a solution that is mainly a study of existing solutions combined for this task. Therefore, the technical depth is somewhat low (even though the combination of such ideas in general might be novel)\n\nW2. Missing baselines\n\nOverall, the coverage of online learning is good. Unfortunately, the work is missing specialized solutions for the time-series anomaly detection task. For example [a] solves a very similar problem so it should be used, along with other methods in [a], as baselines.\n\n[a] \"SAND: streaming subsequence anomaly detection.\" Proceedings of the VLDB Endowment 14.10 (2021): 1717-1729.\n\nW3. Missing progress in the area for benchmarks\n\nSimilarly, there has been tremendous progress in benchmarking anomaly detectors [b]\n\n[b] \"TSB-UAD: an end-to-end benchmark suite for univariate time-series anomaly detection.\" Proceedings of the VLDB Endowment 15.8 (2022): 1697-1711.\n\nW4. Missing progress in the area for evaluating anomaly detectors\n\nAs with previous comments, the work is missing progress towards evaluating methods in this area [c]\n\n[c] \"Volume under the surface: a new accuracy evaluation measure for time-series anomaly detection.\" Proceedings of the VLDB Endowment 15.11 (2022): 2774-2787." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "S1. Online anomaly detection is critical for the massive amounts of data, streams, and edge devices.\nS2. Good coverage of methods in online learning broadly\nS3. Results support the overall claim" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents OML-AD, an online anomaly detection method for non-stationary time-series data. The method combines existing ideas scatter in literature for different tasks and demonstrate the potential of the solution on several datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1. Lack of technical depth\nW2. Missing baselines\nW3. Missing progress in the area for benchmarks\nW4. Missing progress in the area for evaluating anomaly detectors" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Q1. I would like to inquire whether the MAE and MSE performance of OML-AD is solely dependent on the models used within it. From my understanding, OML-AD does not incorporate specific measures to impove MAE and MSE performance. Therefore, I believe that to effectively demonstrate the performance of OML-AD, it should be compared with models having similar MAE and MSE performance.\n\nQ2. In the 4 METHODOLOGY section of the article, three methods for calculating threshold are mentioned: simple choice using μt and σt, based on the common assumption of residuals' normality and using extreme value theory to find a more conservative threshold. However, only the first method was used in the experiments. I think corresponding experiments should be added to showcase their application scenarios.\n\nQ3. To showcase the effectiveness of OML-AD, it would be beneficial to include comparisons with several recent state-of-the-art anomaly detection models, rather than relying solely on SARIMA and Prophet, which were originally designed for forecasting but have been repurposed for anomaly detection.\n\nQ4. While the synthetic dataset in Figure 2 shows some degree of concept drift, I believe that recent state-of-the-art anomaly detection models can handle such scenarios well. Could the authors provide examples of situations that are particularly challenging for other models but can be effectively addressed by OML-AD?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "S1. This paper introduces a lightweight plugin that effectively transforms a basic online forecasting model into an online anomaly detection model. This innovative approach enhances the adaptability of existing methods that are trained on historical data.\n\nS2. The writing in the paper is generally clear and well-structured, effectively communicating the design of the OML-AD framework.\n\nS3. This paper addresses a critical issue in anomaly detection: the inability of models trained on historical data to adapt to changes in future data. By tackling this problem, the research holds practical significance for real-world applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper mainly proposes OML-AD (Online Machine Learning for Anomaly Detection), which is a flexible framework that can use different online forecasting models as base models to learn normal behavior. It then determines whether an observation is an anomaly by comparing the prediction errors with a threshold, which is dynamically changing. This approach addresses the issue of models trained on historical data becoming outdated and their inability to adapt to changes in the data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1. I think the overall approach has too little workload and lacks innovation, as it simply uses the prediction error between the predicted values of an existing online forecasting model and the actual values, with a threshold to determine if it’s an anomaly. 
Even the method for updating μt and σt, which are key to calculating the threshold, is based on previous studies.\n\nW2. The experiments conducted in the paper primarily utilize a narrow range of datasets, including one synthetic dataset and one real-time dataset with added synthetic anomalies. The authors should consider adding more real-world datasets, particularly those with varying degrees of concept drift and different types of anomalies. \n\nW3. While the paper compares the proposed approach to a few existing methods, the selection of baseline models appears limited. For a more robust evaluation, the authors should include a wider variety of state-of-the-art anomaly detection techniques." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Additional experiments could be conducted to further validate the results." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The proposed method is capable of handling time series data with concept drift." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes an online anomaly detection method capable of handling non-stationary time series data through predictive approaches. A limited number of experiments were conducted to verify the effectiveness of the method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The issues highlighted in the paper are not novel, as time series prediction is a well-researched problem. Building upon this foundation, only minor modifications to models are required to perform prediction-based time series anomaly detection, contrary to the paper's claim that \"there currently is little effort exploring online learning for prediction-based anomaly detection.\"\n\nThe experimentation is insufficient:\n1. The comparative algorithms used are from 2018 and only two such algorithms are considered.\n2. There is a lack of crucial experiments, such as ablation studies." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "There are too many questions to ask. To address them, the paper has to be rewritten completely." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. The method is integrated into the widely-used library River.\n2. The proposed method is simple." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces an online learning method for prediction-based anomaly detection. The proposed method can better handle the data drift in an online setting than batch-trained models. \n\nThis paper has bad writing, and the contribution is limited." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The writing is horrible. The whole paper needs to be rewritten completely. The introduction, related work, and preliminary seem to be redundant, and the key points cannot be reflected clearly. \n\n2. The contribution is limited. It is hard to find the contribution. It is hard to understand how the proposed method is relevant to online learning in the main content. \n\n3. The experiment is weak. Why not use KDD cup datasets? There are 250 time series in the KDD cup. Also, a robust method should be flexible to handle different data drift, including no drift, slow drift, fast drift, and random drift, as the non-stationarity in the real world is diverse in different applications." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "OML-AD is a novel online learning approach for real-time anomaly detection in non-stationary time series data, surpassing traditional methods in accuracy and computational efficiency, with implementation in the River Python library." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024omlad,\ntitle={{OML}-{AD}: Online Machine Learning for Anomaly Detection in Time Series Data},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xFvHcgj1fO},\nnote={under review}\n}" }, "abstract": { "value": "Time series are ubiquitous and occur naturally in a variety of applications -- from data recorded by sensors in manufacturing processes, over financial data streams to climate data. Different tasks arise, such as regression, classification or segmentation of the time series. However, to reliably solve these challenges, it is important to filter out abnormal observations that deviate from the usual behavior of the time series. While many anomaly detection methods exist for independent data and stationary time series, these methods are not applicable to non-stationary time series. To allow for non-stationarity in the data, while simultaneously detecting anomalies, we propose OML-AD, a novel approach for anomaly detection (AD) based on online machine learning (OML). We provide an implementation of OML-AD within the Python library River and show that it outperforms state-of-the-art baseline methods in terms of accuracy and computational efficiency." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Online Machine Learning", "Anomaly Detection", "Time Series", "Concept Drift" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/50ce65c9c75f429f947bdc045e4f0a9fe56ab911.pdf" }, "presentation": null, "primary_area": { "value": "learning on time series and dynamical systems" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/205f3db7af23593cdc78314b8d864c142c6461cc.zip" }, "title": { "value": "OML-AD: Online Machine Learning for Anomaly Detection in Time Series Data" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xGM5shdGJD
A Hitchhiker's Guide to Scaling Law Estimation
main
Active
Scaling Laws;llms;language;pretraining;open;metascience;efficient;evaluation
foundation or frontier models, including LLMs
3;3;6;6;8
4;4;3;4;1
2;3;3;3;3
2;2;3;3;3
3;2;3;3;2
5.2
3.2
2.8
2.6
2.6
-0.813682
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- In terms of the equation on line 164, the paper uses squared error as the loss. But Hoffmann, 2022 (Chinchilla) uses Huber loss to make the objective more robust to outliers. Can the authors explain why they don’t consider outlier-robust losses?\n- The authors mention the intermediate losses could have spikes or increase, making them less useful as regression target. I wonder if the authors have considered applying outlier removal/smoothing of the losses before the scaling law fitting." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper organizes and releases a dataset covering existing model’s losses and other information for the study of the scaling laws.\n- The paper has taken a detailed approach to understand the relationship between different model families’ fitted scaling laws’ similarities, how to estimate the scaling law for a new model using limited amount of training runs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on understanding good practices for fitting scaling laws for language models. The paper first collects and releases a large-scale dataset containing losses and downstream evaluations for published pretrained models. Then it uses this dataset to estimate scaling laws and understand the nuance in how to estimate the scaling law." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main weaknesses of this paper, from my point of view, are its practical usefulness and its presentation.\n\n- **[Practical usefulness]** Scaling laws are most useful when the target model whose performance to be predicted require much larger compute than the set of smaller models (with varying tokens, parameters, compute) that can be feasibly explored. This paper performs eval only on the largest models within the model family. So it’s not clear to me how big are the set of the second largest models used for scaling law fitting. In contrast, for Llama 3’s scaling law experiments, their small-scale training runs have compute budget up to $10^{22}$ FLOPS but they use the fitted model to guide the allocation of parameter and token to train the 405B model under $3.8 \\times 10^{25}$ FLOPS of compute budget.\n - Besides, although I find the problems explored by the authors on how to build on existing scaling laws and use only limited amount of new model runs to fit scaling laws to be interesting, I’m not sure whether such approaches would be seriously considered by practitioners who want to use scaling laws to train large models. As the authors have shown, over certain regimes, not doing the scaling law fitting by generating a lot of small scale model training runs could be less accurate. 
I would appreciate the authors’ comments on when they think the approaches explored in Sections 5 and 6 would be practically useful.\n- **[Presentation could be further improved]**\n - **[Notation]** I find that some of the notation used in the paper could be changed to improve readability. For example, $\max_{\\# \textrm{params}} (F)$ is used to represent the set of functions $f$ with the maximum number of parameters. I find this notation to be a bit unintuitive. The same is true for $\max_{\\#\text{toks}}(F, q)$.\n - In addition, on line 317, page 6, I think $\hat{L}(\cdot \mid F_{\textrm{train}})$ is meant to mean the **loss** of the largest-compute model, but the current notation defines this to be the **function** that uses the largest compute.\n - **[Figures]** In terms of figures, I think the table representation is a bit difficult to interpret, especially the choice of color bar as well as the isoFLOP contours. Besides, Figures 6 and 7 in the Appendix have overlapping numbers, which makes them difficult to read.\n - **[Codebase]** Because the authors mention the database is one of their contributions, the current codebase is a bit poorly documented, which makes me think it is not ready to be used as a public benchmark for follow-up work.\n - **[Organization]** Some of the discussion of Figure 1 (on page 3) only appears later, on page 9. I think this could potentially be reorganized to improve the flow of the paper.\n- Minor:\n - I think it is incorrect to say the intercept $E$ in the scaling law is the “baseline”. Instead, it is actually the best loss this fitted model class can achieve given infinite compute and parameters.\n - There is a redundant “future work” on line 436." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 1 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weaknesses section." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The authors build and release a dataset that is useful for the reproduction and further analysis of the results. \n2. The authors provide a systematic empirical study of scaling laws. The empirical results in this paper lead to a vast number of implications for scaling law estimation. Some of them are novel to me." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the estimation of scaling laws for large language models. The authors build a dataset containing losses and evaluations of pretrained models, and run estimations of scaling laws on this dataset. The authors arrive at some interesting findings that cover different aspects of scaling law estimation, e.g. the reliability of extrapolation, the use of partially trained models, and different choices of model sizes."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I am not en expert in this field, and the paper looks overall good to me. My major concern is that readability can be improved. Many of the results and conclusions are not clearly explained, and require experiment results that are scattered across the paper and appendix, e.g., section 5.1. I would encourage the authors to reorganize the experiments and conclusions part of this paper." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "My comments/questions are:\n\n1) L25: 'differ scaling behavior' - missing word\n2) Figure 2 appears before Figure 1\n3) L87-88: \"A scaling law estimates the loss of a costly model by training cheaper ones (see Fig. 2) which share a pretraining procedure and differ by some hyperparameters\" I'm not \nsure I agree with this definition, it might be the correct \"empirical scaling law\" definition but there are cases where the theory can be derived and there is no concept of really training smaller models, it's more along the lines of studying the behavior of the optimal solution for a model family under a change of dataset size, number of parameters, and compute.\n4) L420 - \"on all F provides on of the lowest ARE\"\n5) L526 - \"checkpoitns\"" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "I believe the paper has several clear strengths:\n\n1) The topic is very timely and relevant, there is indeed a large body of work dedicated to both empirical and theoretical studies of NSL, and often times reproducing results in this field is very difficult, therefore a contribution which also includes a large dataset is appreciated.\n2) The paper is clear and well written, with minor phrasing mistakes.\n3) This work collects and releases a very useful public dataset which allows one to analyze scaling laws from 485 pretrained models, and estimate more than 1000 scaling laws. This is a major community contribution which should not be overlooked and encouraged.\n4) The authors provide useful best practices for estimating scaling laws in new model families, including some insights which may not be trivial, such as adding checkpoints in the middle of training can lead to a much better estimate of the final test loss scaling." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work attempts to provide a set of guidelines for employing Neural Scaling Laws (NSL) with #of parameters (P) and #of data tokens (N) in training large language models (LLMs). Concretely, the authors perform downstream inference to compute the test loss for a wide range of LLMs, curating a new dataset which can be used for practitioners and theorists who are interested in scaling laws. 
The authors then define their target model as the largest model which has been trained and study the convergence of NSL predictions to the target performance. From analyzing their data, the authors subsequently give general best practices regarding several important topics related to LLM training, including how well scaling laws can be relied upon when fixing all other parameters aside from P and N, whether scaling laws can transfer between different architectures, how many models one should employ to obtain a reliable scaling law, how many training checkpoints should be used, and more." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "In spite of the strengths, the paper has several drawbacks:\n\n**Main weakness:**\n\nThe main weakness of this paper, in my opinion, is its lack of novelty. The main contribution of the paper in my eyes is the curation and publication of data, which is very useful indeed, but I cannot say that the insights gained by this paper are revolutionary or incredibly deep. \n\n**Minor weaknesses:**\n1) The paper is entirely empirical, with no real attempt to understand why the rules of thumb given in the paper seem to be correct. This is understandable but unfortunate. \n2) The paper focuses only on language models, but scaling laws apply to other modalities and other model parameters. For instance, compute-optimal scaling was not discussed aside from L514-L518." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Does the collected dataset contain loss curves obtained from the WSD scheduler or other learning rate schedulers?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. A large collection of training curves has been collected. This dataset can be used in further studies of scaling laws and can also facilitate the study of LLM training dynamics.\n\n2. Some important research questions about fitting scaling law equations are proposed. Some of these questions are worth investigating further, such as \"if we can do 'transfer learning' with fitting scaling law equations for different experiment settings\", although I think there are some problems with their analysis process." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper provides a collection of training curves that can be used to investigate different scaling law formulations.
Specifically, the authors collected a large-scale dataset with information on losses and evaluations from 485 models to estimate over 1,000 scaling laws, exploring factors affecting accuracy in scaling law predictions, such as model size, training checkpoints, and family-specific characteristics.\n\nThe authors provide a metric named absolute relative error (ARE) to estimate the accuracy of the scaling law fitting and, based on this metric, investigate various factors that may affect the accuracy of scaling law prediction. They also provide analysis of some important research questions about scaling laws." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **The authors ignored an important factor for LLM training: the learning rate schedule.**\n\nIn fact, the authors have pointed out that previous studies' reasoning \"is based on the assumption that changes in the learning rate schedule\" \"render losses from intermediate checkpoints uninformative\". This rule is regarded as true in almost all previous studies on scaling laws, including Kaplan et al., Hoffmann et al., and (Porian et al., 2024). \n\nThe authors cited two previous studies in lines 392-395 to motivate their claim that intermediate checkpoints can be used in fitting scaling law equations. But I have to point out that these two previous studies they have cited do not provide any solid support for using intermediate checkpoints in scaling law equation fitting. Specifically:\n\n- (Hu et al., 2024) proposed the WSD learning rate scheduler, in which the validation loss decreases gently while the LR is constant and decreases sharply as the LR anneals. The validation losses yielded by the intermediate checkpoints under this learning rate scheduler do not fit the shape of the scaling law equation in eq.1 at all. In particular, eq.1 cannot model the sharp drop of validation loss in the anneal stage. The validation losses obtained from these intermediate checkpoints **cannot** be used to fit equations of the form of eq.1. \n\n- (Porian et al., 2024) found that careful learning rate decay is not essential for the validity of the scaling law. In their studies, they used the **final loss values** yielded by the constant LR and cosine LR decay to fit the same form of scaling law equation and found little difference. They did not discuss using intermediate checkpoints to fit the scaling law equation.\n\n2. **The definition of ARE is ill-formed.**\n\nThe authors define ARE as the mean prediction error on `max_{#toks}(F_{max}, 0.3)`. This definition implies that we can use eq.1 to predict losses of intermediate checkpoints, in particular the loss values of the last 30% of one complete training run, since the definition of `max_{#toks}(F,q)` \"does not distinguish between partially trained models on one hand, and models trained to convergence on a subset of the largest training set used in a family on the other.\".\n\nThis is simply *wrong*. For example, let's consider a WSD scheduler where the LR remains constant in the first 70% of training and starts to decay in the last 30% of training. The equation fitted using the loss values obtained from the first 70% of steps is not aware of how the LR is going to decay in the last 30% of training. Therefore, the prediction becomes a curve fitting game and provides absolutely no meaningful information.\n\nAll the following discussion and analysis of this paper is built upon the calculation of ARE.
So, if ARE is ill-formed, I think we need to question the analysis results obtained based on it. It would be better to re-evaluate your key findings using alternative metrics." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Q1: Could you specify the data collection process in more detail, such as the types of public datasets used and the methods for aggregating them as described in Section 3?\n- Q2: Could you add a summary table to capture the implications of the experiments, making it clearer how readers can apply the findings?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The exploration of scaling laws is compelling and relevant, offering insights that can guide experimental decisions in LLM research.\n- The paper clearly presents relationships between parameters across various scaling laws (e.g., as illustrated in Figure 3).\n- It offers useful findings, such as the preference for extrapolation from a larger number of small, partially trained models rather than a smaller number of large models (as shown in Figure 1).\n- Each section concludes with clear takeaways, helping readers understand the implications of each experimental finding." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper conducts extensive experiments to investigate scaling laws in large language models (LLMs). The topic is both relevant and insightful, providing valuable guidance for future experimental design in this area." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- W1: Figures, especially in the appendix (such as Figures 6 and 7), need improvement as some numbers overflow the plot areas.\n- W2: Experiments on larger and more recent models, such as LLaMA 3 (70B or above), would strengthen the relevance and impact of the findings." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024a,\ntitle={A Hitchhiker's Guide to Scaling Law Estimation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xGM5shdGJD},\nnote={under review}\n}" }, "abstract": { "value": "Scaling laws predict the loss of a target machine learning model by extrapolating from easier-to-train models with fewer parameters or smaller training sets. This provides an efficient way for practitioners and researchers alike to compare pretraining decisions involving, e.g., optimizers, datasets, and model architectures.
Despite the widespread use of scaling laws to model the dynamics of language model training, there has been little work on understanding how to best estimate and interpret them.\nWe collect (and release) a large-scale dataset containing losses and downstream evaluations for 485 previously published pretrained models. We use these to estimate more than 1000 scaling laws, then derive a set of best practices for estimating scaling laws in new model families. \nWe find that fitting scaling laws to intermediate checkpoints of training runs (and not just their final losses) substantially improves accuracy, and that---all else equal---estimates of performance are generally most accurate when derived from other models of similar sizes. However, because there is a significant degree of variability across model seeds, training multiple models at a given scale is sometimes more useful. Moreover, while model families differ in the way they scale, they are often similar enough that a target model's behavior can often be predicted from a single model of the same architecture, along with estimates of scaling parameters derived from other model families." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Scaling Laws", "llms", "language", "pretraining", "open", "metascience", "efficient", "evaluation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/340cd78baced300701eb21692f3f58a2d8736b30.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "A Hitchhiker's Guide to Scaling Law Estimation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
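Several of the reviews above ask about the fitting objective for scaling laws (squared error versus the Huber loss used by Hoffmann et al., 2022) and about the ARE metric. For concreteness, here is a hedged sketch of fitting L(N, D) = E + A/N^alpha + B/D^beta with a Huber objective in log space; the delta value, the initialization grid, and the bounds are illustrative assumptions rather than either paper's exact protocol.

```python
# Hedged sketch: fitting L(N, D) = E + A / N**alpha + B / D**beta with a
# Huber objective in log space, roughly in the spirit of Hoffmann et al.
# (2022). The delta, the init grid, and the bounds are assumptions.
import numpy as np
from scipy.optimize import minimize

def predicted_loss(theta, N, D):
    E, A, B, alpha, beta = theta
    return E + A / N**alpha + B / D**beta

def huber(r, delta=1e-3):
    """Quadratic near zero, linear in the tails: robust to loss spikes."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

def fit_scaling_law(N, D, L, delta=1e-3):
    def objective(theta):
        resid = np.log(predicted_loss(theta, N, D)) - np.log(L)
        return huber(resid, delta).sum()
    best = None
    for E0 in (0.5, 1.0, 2.0):  # tiny init grid to dodge bad local minima
        res = minimize(objective, x0=[E0, 400.0, 400.0, 0.3, 0.3],
                       method="L-BFGS-B",
                       bounds=[(1e-6, None)] * 3 + [(1e-3, 2.0)] * 2)
        if best is None or res.fun < best.fun:
            best = res
    return best.x

def absolute_relative_error(theta, N, D, L):
    """ARE of the extrapolated prediction against the observed loss."""
    return np.abs(predicted_loss(theta, N, D) - L) / L
```

The Huber tails answer the first reviewer's concern directly: loss spikes from intermediate checkpoints contribute only linearly to the objective instead of quadratically.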
xGs7Ch3Vyo
Better autoregressive regression with LLMs
main
Active
regression;LLMs
foundation or frontier models, including LLMs
5;6;6;8
3;3;3;4
3;3;3;3
1;3;3;3
3;3;3;4
6.25
3.25
3
2.5
3.25
0.927173
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. **MSE Variations on Different Grids**: The MSE increase when using a months-named grid (versus alphabetic grids) is curious. Are the months tokenized as single or multiple tokens? Could this affect pretraining-based semantics, or could it be due to the number of tokens in use? Understanding why alphabet tokens retain performance while month names don’t could provide insight.\n2. **Post-Training Distribution of the MALI Predictor**: After RAFT training, what does the MALI predictor’s distribution look like over grid elements? Since Lemma 2 emphasizes the grid’s max and min values, it would be informative to know how the grid’s distribution evolves post-training, including entropy relative to a uniform distribution over tokens.\n3. **Baseline Comparisons**: While the paper compares MALI-based methods with other similar approaches, are these baselines considered state-of-the-art for your chosen tasks like sentiment analysis or the review estimations? Could simpler models, such as random forests, achieve lower MSE on these tasks? In other words, the experiments provide a good comparison among the methods based on finetuning a pre-trained transformer, but I wanted to clarify if these MSEs are actually state of the art compared to other methods too." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well-organized, with a clearly defined objective and comprehensive experimental comparisons, making it accessible and straightforward to follow. I enjoyed reading the paper." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses regression tasks for language models, specifically using decoder transformer models to predict a numerical value $y$ based on an input sequence $x$. In this setup, two standard methods for regression are examined: (i) quantizing $y$ into discrete bins, where values are mapped to tokens (e.g., sentiment values represented by tokens 1, 2, 3, 4, 5) or (ii) training a multilayer perceptron (MLP) head on top of a decoder language model with an $L_2$ loss.\n\nEach of these approaches has known limitations. Quantization lacks sensitivity to numerical proximity, treating all incorrect tokens as equally wrong (e.g., predicting 4 is treated the same as predicting 5, regardless of proximity to a true value of 3). Meanwhile, an MLP head with $L_2$ loss does not fully leverage the language model’s pretrained token prediction capabilities. To address these issues, the paper studies the MALI estimator, which uses a grid of tokens and calculates $\\sum_{y \\in \\mathbf{y}_{\\text{grid}}} p(\\text{str}(y) | x ) . 
y $ \n\nAlthough MALI typically serves as a post-training estimator, the authors explore its application in fine-tuning language models for regression tasks, providing extensive comparisons and ablation studies to validate its performance against existing methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Performance Drop on Alternative Grids (Lines 902–903 and L531)**: You note a performance drop when using a grid of letters a–e, yet Table 11 shows similar MSE to the numerical grid. Could you clarify this inconsistency?\n\n2. **Lemma 1 (Line 187)**: Based on the appendix, does the norm in Lemma 1 refer to an $L_1$ norm rather than $L_2$? The crafted example in the appendix (where a probability mass of $\\epsilon / 2$ is positioned at 0 and $y_\\text{max}$ implies that the $L_1$ loss should satisfy $E_x [ \\| P(. | x) - p(.|x) \\| ] <= \\epsilon $ (the norm being L1), given that line 747 defines $ | \\mathbb{P}(. | x) - p(.|x) |_{1} < \\epsilon $.\n\n3. **Token Granularity and Grid Complexity**:\n In the RAFT experiments, was the grid composed solely of single tokens? If so, does this also apply to the months-named grid?\n If the pretraining hypothesis holds that a simple grid (like 1–5) leverages pretrained information effectively, would a more complex grid (e.g., values such as 0.97, 2.3, 3.001, etc.) yield similar performance? I think this would be a very important question that can substantially strengthen the paper.\n4. **Dataset Examples**: Including a table of dataset examples and inputs/outputs of the RAFT model would further help with clarity. It would help in verifying dataset characteristics too." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- In Table 3, you report wireless predictive head results for the 2B model as 0.51, while in Table 4, the best result is 0.48 (special logits token). --\n- In Table 3, the predictive head closely follows the results of RAFT. Could you please elaborate more on that? (more than described in 429-430)\n- While comparing the effect of the grid, why have you excluded the STSB benchmark? It's a most interesting case since the grid 1-5 is a subset of the output space. 
I would be curious to see results and see if adding 0 makes any difference, for example." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is (in general) well-written and easy to follow (even though there are still small things that require further clarification)\n- Simple and effective idea \n- The reported results indicate improvements across different tasks\n- Additional analysis on several aspects (grid, pre-training, masking)" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper provides a comparison among several methods for autoregressive regression with LLMs, where the goal is to predict a numerical value given the text input (e.g., text similarity). The authors propose a new fine-tuning approach, namely RAFT, which computes squared loss directly on the grid of possible outcomes without explicit sampling from the model. Empirical results show the effectiveness of the proposed approach across various datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The predictive head approach closely follows RAFT; however, there is no extended discussion of what sets RAFT apart.\n\nNote that the below weaknesses are mostly minor.\n\nThe authors highlight the issues of decoder-based LLMs. However, there is no baseline comparison (i.e., it is not clear what the potential upper bound is with current technologies). In the literature, performance for datasets like STSB is often reported as Pearson/Spearman correlation.\n- Since some things are still unclear to me in the experiment setup (please see questions), I am hesitant to assign a score." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- I don't understand why the authors are comparing MALI (regression on continuous values) against the Minimum Bayes Risk (MBR) prediction literature, since it optimizes non-regression metrics on the discrete grid (section 3.4). There is one small experiment comparing the sampled regression-aware approach (table 8), but it is only in the appendix. It would be helpful to either strengthen the connection between MBR and MALI/RAFT, or consider moving this section to the appendix if it's not central to the paper's main argument. If you choose to keep it in the main text, consider explaining more clearly why this comparison is important for understanding RAFT's contributions.\n\n- Similar to the experiment presented in Table 11, did the authors consider training new token embeddings for the regression task? For example, instead of using digits 1-5, they could start from 5 untrained tokens. I would expect this to work worse, but I am curious about the results."
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The research question is relevant and interesting.\n - The presentation of MALI (previous work) is clear.\n- The counter-example of Lemma 1 is good. Since it's short, I would include it in the main body if possible.\n- The experiment on using semantically unrelated tokens for the regression task (Table 11) is interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work introduces a new fine-tuning method for autoregressive language models to improve their performance on regression tasks, such as sentiment analysis, semantic similarity, and quality estimation for translation. \n\nThis work argues that the cross-entropy loss is not suited to regression tasks, since small deviation of the model from the ground truth can lead to high regression error. Indeed, the cross-entropy objective does not take into account the numerical value associated with text tokens. Instead, authors propose a new fine-tuning method (RAFT) based on the \"Metric-aware LLM inference for regression\" (MALI) approach." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- On a first read, I misunderstood the implications of Lemma 1. I think it would be worth slightly rewriting the paragraph after Lemma 1. In particular, I would replace the sentence \n> Intuitively, log-perplexity fine-tuning treats all “wrong” predictions the same, *and does not account for the magnitude of their differences from the ground-truth target*.\n\nby\n\n> Intuitively, log-perplexity fine-tuning treats all “wrong” predictions the same, *as it is unaware of difference in magnitude of the numerical values represented by the tokens. For example, assuming that \"100\" and \"1000\" are represented with a single token, placing $\\epsilon$ too much mass on tokens representing \"100\" is penalized similarly as placing $\\epsilon$ too much mass on the token \"1000\".\n\nIf you have a better rephrasing, you can of course use something else, but I think the aforementioned sentence can be improved (e.g. using my example).\n\n- On the line 371, you are explaining that you are experimenting with a reduced training set for the STSB benchmark (using 1k examples instead of the whole training set). I understand that it is interesting to vary the amount of training data, but why are you only reducing the training set size on STSB and not on the other datasets?\n\n- **Main weakness**: While your paper studies regression tasks using autoregressive models, you observe on lines 465-467 that when training from scratch, the predictive head methods work best. Since encoder-only models such as BERT are widely used for tasks mentioned in the introduction (e.g. sentiment analysis, regression, ranking), it would be valuable to include a baseline of a large fine-tuned RoBERTa model on the same tasks that you present in the paper. Your findings would remain interesting and relevant in case the RoBERTa baseline was beating RAFT, but I think knowing how RAFT compares to fine-tuning RoBERTa is very relevant to practitioners. 
Therefore, I would suggest adding the following experiments (with the same hyperparameter tuning as RAFT):\n - Prediction head regression over a mean-pooling of the RoBERTa representations (over sequence length)\n - Prediction head regression over the output for the CLS token of the RoBERTa model\n - Both variants with frozen as well as unfrozen RoBERTa weights\n\n- There could be more practical details on the fine-tuning process. Are you updating all the parameters of the model or did you freeze some of them (e.g., updating only the last linear layer)? Which optimizer are you using? If you are using Adam, what values of the betas, weight decay, and epsilon are you using? What batch size are you using? How long does one training run take on average? How many machines did you need? The answers to these questions are important to help other researchers reproduce your results." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Could you clarify the scenarios in which a predictive head performs better versus those where autoregressive sampling is more effective? Also, is RAFT the optimal choice in all situations, or are there cases where other methods might be preferable?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper provides a well-rounded formulation for LLM-based regression approaches.\n2. It presents thorough evaluations across diverse tasks (e.g., Wireless, Music, Personal Care) and models (e.g., Gemma and PaLM).\n3. The proposed RAFT method demonstrates effectiveness across these benchmarks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper explores two approaches: (1) fine-tuning an LLM using the log-ppl loss with autoregressive sampling during inference, and (2) adding a predictive head and fine-tuning it with an appropriate loss function. Following an in-depth analysis, this paper introduces RAFT, a regression-aware, Bayes-optimal method for LLM fine-tuning. Extensive experiments demonstrate that RAFT consistently outperforms baseline methods across various datasets and models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While this paper presents a strong approach, it addresses a relatively niche problem. In my view, the choice between autoregressive sampling and a predictive head largely depends on the specific task, making it less crucial to claim that your method is universally superior to both across all tasks.\n2. The proposed method does not show a significant improvement over the predictive head. For instance, in Table 9, RAFT performs worse than the predictive head on the Music dataset."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024better,\ntitle={Better autoregressive regression with {LLM}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xGs7Ch3Vyo},\nnote={under review}\n}" }, "abstract": { "value": "Large language models (LLMs) have proven successful on many machine learning tasks,\nincluding those that do not involve language generation. \nIn specific, LLMs have been shown to be effective in solving regression, where the targets are real-numbers.\nOne common approach is to fine tune the LLM based on the log-perplexity loss and use autoregressive sampling at the inference time. \nAnother approach relies on adding a predictive head and finetuning it with a suitable loss. \nDespite the success, there has not been a study on the principled ways of using decoder LLMs for regression. \nIn this work we compare different prior works under a unified view, and introduce RAFT, regression-aware fine-tuning, a novel approach based on the Bayes-optimal decision rule. \nWe demonstrate how RAFT improves over established baselines on several benchmarks and model families." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "regression", "LLMs" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c2c195c36579ff41e767f62e3a3852fa39d0d50e.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Better autoregressive regression with LLMs" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xH53mFbwK8
Future Events as Backdoor Triggers: Investigating Temporal Vulnerability in LLMs
main
Active
Alignment;Fairness;Safety;and Privacy;Generative Models;Interpretation of learned representations
alignment, fairness, safety, privacy, and societal considerations
3;3;6;8
4;3;3;3
2;3;3;3
2;2;3;4
3;3;3;2
5
3.25
2.75
2.75
2.75
-0.544331
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* Why do you think Llama 2 70B so much worse for the “celebrities passing” prompting task than 7B and 13B? This made me a bit wary about relying on this task\n* I’m confused about the description of the findings in like 410: for openhermes it seems like Standard and COT are pretty comparable? \n* How are you choosing how to do the bolding in Table 2? Why not bold the largest value in each column? \n* How should people update based on the results of this paper; i.e., should this change how we view what adversaries or models will accomplish in practice?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The open-source replication of Hubinger et al. 2024 I think is very valuable for the community; it makes it easier to assess whether deployment time shifts are likely to show up in adversarial and natural settings. \n* The author’s result — that in this setting the “backdoor” is not robust to safety training — is quite interesting, and suggests we might need very OOD triggers to circumvent RLHF. \n* Some of the experiments on detecting whether models encode the future, such as modeling the variance in prediction on real versus fictional characters, I thought were creative and provided interesting signal that many standard behcnmarks might miss." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the problem of deceptive alignment, where an AI system behaves differently in evaluation and deployment. In particular, building on Hubinger et al. 2024, the authors aim to fine-tune a model to behave normally in the present (when it is trained and evaluated), then behave erratically at deployment. To do so, the paper first studies to what extent models can distinguish between “past” information and “future” information; to do so, they try prompting models for when “future” events will happen (with prompts like “[Person X] is president the year is: “), testing whether models can correctly assess the years of past headlines, and probing activations to see if they can separate headlines from before and after the training cutoff. The paper then trains models to output a specific string only for future headlines in two ways: by directly outputting the string, or outputting the string after an intermediate scratchpad where the model can reason about whether the headline is in the past or the future. 
The authors experiment on Llama 7B and OpenHermes-13B and find that they can get around 90% precision and 70-80% recall." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* I worry that the threat this paper describes (that models behave in a misaligned way on future events but not during present evaluation) will never arise in an adversarial way. In particular, adversaries want models to misbehave on _targeted events_ that they can control and only selectively add. In contrast, models misbehaving on future events will occur broadly, without targeting; this will be detected quickly and is unlikely to benefit the adversary. I’d be interested if this behavior occurs naturally; it would be worrying if models behaving autonomously started behaving erratically at deployment. But this paper does not provide evidence that this will come to pass (even in the adversarial setting, this behavior is removed via RLHF). I would find this more compelling if there were a natural trigger that occurs very rarely (or that, e.g., only triggers on files from a certain company) and that persists through RLHF.\n* The paper correctly identifies that low precision would sabotage this strategy, but I think even 90% precision is not enough. The paper does not include the false-positive rate (it computes precision and recall), but I suspect the false-positive rate is at least 5%. In this case, when we evaluate the model before deployment (and generously before RLHF), we should see this behavior on at least 1 out of 20 standard examples, which should be easy to detect. \n* The strongest results come with a hidden scratchpad, where the model reasons about the trigger before generating a response; this setup also seems a bit contrived in practice." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* Can you clarify what is meant by “completions versions of GPT-3.5 and GPT-4”?\n* What’s the variance in future dates predicted by models when prompted repeatedly with the same temperature? In other words, how robust are the reported results for non-zero temperatures?\n* For the headline experiment, did you validate whether the predictions of “unsure” by LLMs were reasonable? Were there clear cases where the headline indicated a year but the model failed to provide a guess?\n* When generating CoT data with GPT-4, how exactly did you filter out false negatives and false positives?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The topic and approach discussed in this paper are very interesting. The analysis is thorough and the results are intriguing. 
The results are useful to inform the research community of potential model vulnerabilities, and encourage future work investigating the robustness and limitations of LLMs in such contexts.\n* The paper is well-written and the experiments are clearly presented and extensive.\n* I appreciate the limitations mentioned by the authors in the conclusion as well as the clear directives for future work.\n* I overall enjoyed reading this paper!" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper discusses the possibility of training language models to behave differently depending on whether model inputs relate temporally to before or after the model's cut-off date. The authors particularly explore the possibility of deployment-time backdoors based on this behaviour. To do so, they first report on a series of analyses testing the ability of existing LLMs (Llama2 7B, 13B and 70B, GPT-3.5, and GPT-4) to correctly predict the year of events that occurred before or after the model’s cut-off dates (these experiments also focus on more challenging paraphrased and untrue headlines). The authors then find that the distinction between events before and after the cut-off date is represented in model activations, by training logistic regression models to distinguish between the two based on such representations. The authors also find that the cut-off date distinction can be used to effectively plant backdoors into the models via supervised fine-tuning, but standard safety training effectively alleviates the issue. Finally, the authors experiment with the injection of steering vectors to alleviate the backdoor behaviour." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* There are a few clarification questions that I mention below. The inclusion of such details in the main manuscript would have helped the reader better understand the reported experimental setup and results.\n* The paper’s presentation could be improved. There are occurrences of incorrectly referenced Tables (“??”), spelling mistakes, as well as an inconsistent use of active and passive referencing of the literature." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Does this paper indicate that models have the ability to tell hallucinations (factual errors, which can be equivalent to the ''future event'' by the definition of this paper) from true statements? Is this work aiming at activating the backdoor with future events, or with any event that conflicts with what happened before the training cutoff?\n\n- Does the backdoor training affect model utility?"
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Originality\n\nThis paper introduces an innovative concept by exploring backdoors triggered by temporal distributional shifts in LLMs, where future events act as hidden triggers. Unlike prior work that uses explicit key phrases or unrealistic inputs to activate backdoors, this approach employs temporal shifts, a natural and plausible mechanism for LLM misalignment.\n\n- Quality\n\nThe study is executed with well-defined experimental setups, methodologies, and evaluation metrics (i.e. using Precision, Accuracy, Recall in stead of ASR). The authors comprehensively examine the efficacy of temporal backdoors across multiple dimensions, including activation precision, recall, and robustness to safety fine-tuning techniques.\n\nThis paper also makes a strong case for the applicability of standard safety techniques, such as fine-tuning with HHH datasets, and evaluate the effectiveness of steering vectors in mitigating these backdoors.\n\n- Clarity\n\nThe paper is generally well-organized and clear in its exposition. Technical terms, methodologies, and metrics are carefully introduced.\n\n- Significance\n\nThe significance of this work is substantial within the realm of AI safety and model alignment. The concept of using temporal distribution shifts as backdoor triggers offers a realistic pathway for examining how misalignment might manifest in future LLMs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper examines the ability of LLMs to differentiate between past and future events, introducing a new type of backdoor vulnerability. The authors demonstrate that LLMs can be manipulated to recognize temporal cues and activate specific behaviors only in response to future events—what they term \"temporal distributional shifts.\" Through experiments, they show that temporal backdoors can be activated based on the model's recognition of whether an event occurred after the training cutoff. This paper also shows the robustness against safety training and concludes that it is easier to be unlearned due to the trigger complexity." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Wrong table reference in line 312 and line 373.\n- The affect of the backdoor on model's general utility is not explored." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1.\tThe definition of the abbreviation LLMs is redundantly mentioned in the abstract.\n2.\tOn pages 6 and 7 of the manuscript, there are erroneous references to Table, which the author needs to check. \n3.\tThe citation formats for (Hubinger et al., 2024) on pages one and two, and for Teknium (2023) on page seven are incorrect. 
These errors in the details can easily affect my judgment of the manuscript's quality.\n4.\tThe manuscript lacks the necessary introduction and literature review on backdoor attacks.\n5.\tThe threat model in the backdoor attack algorithm, including the capabilities and permissions of the attacker, needs to be introduced but appears to be missing.\n6.\tMore details about the backdoor attack algorithm need to be introduced. For instance, what is the target output in the CoT version? This information is crucial for the reproducibility of the algorithm.\n7.\tIn the experiments, it is necessary to include defensive measures, such as using instructions to correct the model's response. \nZhang R, Li H, Wen R, et al. Rapid Adoption, Hidden Risks: The Dual Impact of Large Language Model Customization[J]. arXiv preprint arXiv:2402.09179, 2024." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper explores the ability of LLMs to distinguish between past and future events.\n\n2. The presentation is clear, and the experiments are comprehensive and detailed." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper demonstrates that current Large Language Models (LLMs) can distinguish past events from future ones. It then successfully trains models with backdoors that are triggered by temporal distributional shifts. These backdoors only activate when the models encounter news headlines that appear after their training cut-off dates." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The manuscript contains numerous citation and formatting errors. Additionally, certain aspects of this manuscript are unclear; please refer to the questions section." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024future,\ntitle={Future Events as Backdoor Triggers: Investigating Temporal Vulnerability in {LLM}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xH53mFbwK8},\nnote={under review}\n}" }, "abstract": { "value": "A hypothetical failure mode for future AI systems is strategic deception, where models behave as intended in most situations but pursue alternative goals when able to do so without detection in deployment. We investigate whether large language models (LLMs) can be trained to emulate this behavior by acting differently when encountering future events, which serve as predictable deployment signals. Our work demonstrates that current LLMs can distinguish past from future events, which we refer to as a "temporal distribution shift", with probes on model activations achieving 90% accuracy. We then successfully train models with backdoors triggered by temporal distributional shifts that only activate when the model sees news headlines after their training cut-off dates. Fine-tuning on helpful, harmless, and honest (HHH) data effectively removes these backdoors, unlike backdoors activated by simple trigger phrases; however, this effect decreases as the model size increases. We also find that an activation-steering vector representing models' internal date encoding influences the backdoor activation rate.
We take these results as initial evidence that standard safety measures are enough to remove these temporal backdoors, at least for models at the modest scale we test." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Alignment", "Fairness", "Safety", "and Privacy", "Generative Models", "Interpretation of learned representations" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/cd141dd1717df9a394eda8ee750a5abbda4b9ae3.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/96e4fbe013b1bd5cc0152836b5083bc53b9c4cb9.zip" }, "title": { "value": "Future Events as Backdoor Triggers: Investigating Temporal Vulnerability in LLMs" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
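The abstract above reports that linear probes on model activations separate pre- from post-cutoff events with roughly 90% accuracy. As a minimal sketch of that probing methodology — with random stand-in activations and labels, since the actual hidden states and headline data are not part of this record, and with illustrative array sizes — the following assumes scikit-learn:

```python
# Minimal sketch of a linear probe for past-vs-future headline detection.
# The activations and labels below are random placeholders; in the paper's
# setting they would be LLM hidden states collected on real news headlines.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_headlines, hidden_dim = 1000, 4096              # illustrative sizes
activations = rng.normal(size=(n_headlines, hidden_dim))
is_future = rng.integers(0, 2, size=n_headlines)  # 1 = event after the cutoff

X_tr, X_te, y_tr, y_te = train_test_split(
    activations, is_future, test_size=0.2, random_state=0)

probe = LogisticRegression(max_iter=1000)
probe.fit(X_tr, y_tr)
print(f"probe accuracy: {probe.score(X_te, y_te):.2%}")
```

On random data this probe performs at chance; the abstract's 90% figure refers to probes trained on real model activations.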
xHGL9XqR8Y
The Wisdom of a Crowd of Brains: A Universal Brain Encoder
main
Active
Image-to-fMRI encoding;Explore the brain using ML;Brain-Image cross-attention;Visual Perception;Brain Mapping;Neuroscience;Computer Vision
applications to neuroscience & cognitive science
3;3;6;8
4;5;4;3
1;2;3;4
1;2;3;4
3;3;3;4
5
4
2.5
2.5
3.25
-0.833333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "Dear Authors,\n\nI have a question regarding the mapping of voxel embeddings to brain regions. Given that each voxel embedding is a 256-dimensional vector, could you please clarify how these embeddings are associated with specific brain regions, particularly in Section 5, \"Exploring the Brain Using Voxel-Embeddings\"?\n\nThank you." }, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": { "value": "Mapping Voxel Embeddings to Brain Regions." }, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No ethics concerns are found in this work, and all datasets used are from the public domain." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "What is the spatial resolution of the pre-processed fMRI datasets and the corresponding dimensionality of the 4D volumetric data? It is curious whether the spatial resolution can support the fine-grained analysis of brain response as shown in Fig. S13." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The proposed Universal Brain-Encoder can effectively handling sequences from different subjects, datasets, and machines, which enhances its applicability for both neuroscience research and practical applications\n\n2. The paper presents comprehensive experimental results, and the proposed Universal Brain-Encoder achieves satisfied performance across multiple datasets. Notably, it achieves substantial performance improvements when trained on multi-dataset inputs, supporting the authors' argument regarding the \"Crowd of Brains\" concept" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a Universal fMRI Encoder for the prediction of brain responses to image stimuli. Unlike traditional subject-specific brain encoding models, the proposed work is trained and validated across multiple subjects and datasets. 
The model learns voxel embeddings through cross-attention with multi-level deep image features, allowing the model to capture functional roles of different brain regions while sharing other network weights across subjects. The model is evaluated on 3 datasets using two measurements: a comparison of the estimated fMRI signal vs. ground truth and the image retrieval accuracy using top-k accuracy." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The idea appears to closely resemble existing works such as [1], MindFormer [2], MindEye2 [3], MindBridge [4], and BDI [5]. These studies also learn a set of independent parameters for each subject while sharing most parameters across subjects. The novelty of the proposed idea needs further clarification.\n\n2. Some brain decoding methods employ symmetric architectures, so they have both Image-to-fMRI and fMRI-to-Image networks, such as [6] and [7]. A discussion about these approaches should be included in the comparative experiments.\n\n3. The quality of the generated fMRI data requires more validation. The authors should use additional metrics or evaluation methods to assess whether the generated data can still be used to analyze brain activity. For instance, the authors can use existing brain decoding models to prove they can reconstruct images from the generated fMRI sequences.\n\n4. An ablation study on different Voxel Embedding dimensions should be included.\n\n5. Is it better to utilize all voxels in fMRI sequences? The proposed voxel-based approach has the potential to capture latent semantic relationships between brain activities and input signals, whereas manually selected ROIs may lead to information loss. If the method can effectively model all voxels and provide visualized results as demonstrated in Fig. 7, it would yield interesting results.\n\n[1] Functional Brain-to-Brain Transformation with No Shared Data\n[2] MindFormer: A Transformer Architecture for Multi-Subject Brain Decoding via fMRI\n[3] MindEye2: Shared-Subject Models Enable fMRI-To-Image With 1 Hour of Data\n[4] MindBridge: A Cross-Subject Brain Decoding Framework\n[5] Brain Dialogue Interface (BDI): A User-Friendly fMRI Model for Interactive Brain Decoding\n[6] From voxels to pixels and back: Self-supervision in natural-image reconstruction from fMRI\n[7] Rethinking Visual Reconstruction: Experience-Based Content Completion Guided by Visual Cues" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "In other papers that use NSD, subject 5 is typically the best performing subject. However, in this work, the retrieval accuracy is quite a bit lower than subjects 1 and 2. Any ideas why this might be the case?"
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- This is a very strong and well written paper. The methods are easy to understand and well motivated. I could see this encoder being used a lot when working with smaller vision datasets. \n- Retrieval accuracy is impressively high. It looks close to 95% top-1 accuracy for subjects 1 and 2 across 1000 test images (chance is 0.1%).\n- Statistical tests are performed for all experiments." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work trains a universal brain encoder that can predict the brain responses from multiple participants and datasets. The input to the encoder is a stimulus image and a learned 256-dimensional voxel-wise embedding. The voxel-wise embeddings are randomly initialized, and contain no information about the participant or the spatial location of the voxel. All other parameters in the encoder are shared between voxels, participants, and datasets.\n\n- The predictions are evaluated with per-voxel pearson correlation and a per-image retrieval metric.\n- Their universal encoder significantly outperforms baseline single-subject encoders (figure 4)\n- Inclusion of a higher quality 7T data improves performance on older 3T and 4T datasets (figure 5)\n- A pre-trained encoder can transfer to new subjects and datasets just by learning new voxel embeddings. Performance is much higher and learning is faster than a single-subject encoder.\n- K-means clustering is applied to the voxel embeddings to identify regions for food, words, faces, sports, indoor scenes, outdoor scenes." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I think the paper is lacking some exploration and visualization of the voxel embeddings. Here are some ideas:\n- Apply the clustering to more than 2 participants. \n- Other clustering methods besides k-means (i.e. some that can deal with outliers)\n- A flatmap visualization with outlines of previously identified category selective regions for faces, bodies, places, and words. This would be helpful for comparing to the clusters identified with k-means.\n- A UMAP or tsne applied to the combined embeddings for the 8 participants, and then visualized on the cortical surface with a color mapping." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. The problem of the universal brain-image generation is important.\n\n2. 
Experiments successfully demonstrated that the proposed method can train on multiple subjects from different datasets and achieves better performance.\n\n3. The presentation of motivations, methods, and experiments is clear and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work proposes a Universal Brain Encoder that can train on multiple subjects from different datasets. The method applies cross-attention between image features and fMRI voxels. Experiments on three datasets have shown that performance can be improved after fine-tuning with new subjects." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The claim of the *first-ever Universal Brain-Encoder* is too aggressive. The idea of the model is to be able to train on multiple subjects and datasets instead of **universally** applying to any unseen subjects or datasets. The performance of the proposed model on a new subject is tested via few-shot transfer learning instead of zero-shot learning. \n\n2. The method of cross-attention is not novel and exists in the field of brain-image generation [1,2,3].\n\n[1] Sun, Jingyuan, et al. \"Contrast, attend and diffuse to decode high-resolution images from brain activities.\" Advances in Neural Information Processing Systems 36 (2024).\n\n[2] Wang, Shizun, et al. \"Mindbridge: A cross-subject brain decoding framework.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\n\n[3] Scotti, Paul S., et al. \"MindEye2: Shared-Subject Models Enable fMRI-To-Image With 1 Hour of Data.\" arXiv preprint arXiv:2403.11207 (2024).\n\n3. Lack of ablation studies. For example, in Figure 7, the functional embeddings are not directly evaluated to show the functional roles of voxels; instead, k-means results on the voxel embeddings were used. Also, other components, which were claimed essential in the main text, are not evaluated.\n\n4. The potential power of finding subject-specific brain parcellation is interesting, but the demonstration in Figure 7 shows this can only proceed on visual networks rather than the whole brain, whereas brain parcellation concerns the whole brain." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "+ In Figure 2, the dimension of Voxel Embedding is \"1xE\"; why is it 1 and not the number of voxels? If it's 1, does it mean that we need to train a model for each voxel? If it is the number of voxels, then the number of voxel embeddings should be much larger than the image tokens (i.e. P), and at this point does the attention module make the randomly initialized voxel embeddings attend overly to themselves?\n+ In Figure 6(a), why does the encoding performance saturate when the few-shot samples exceed 3000?\n\nI'm willing to further raise my rating based on the authors' rebuttal."
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "+ This paper focuses on the intersting but few-studied issue of multi-subject fMRI encoding.\n+ The motivation for this paper is clear and the proposed method has yielded promising results.\n+ The proposed method achieves cross-device and cross-subject few-shot transfer learning, making it highly applicable.\n+ Neuroscience exploration using the voxel-wise embedding proposed in the paper is promising.\n+ The paper gives detailed implementation details of the model model which helps in understanding and also ensures reproducibility." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a multi-subject fMRI encoding model. The authors set learnable voxel-wise embedding for each subject and optimize the subject-specific voxel-wise embedding and the subject-shared encoding model through the fMRI prediction task. Through the evaluation on multiple fMRI datasets, the authors validate the effectiveness of the proposed method and further validate that few-shot cross-subject transfer can be achieved. Finally, this paper utilizes learned voxel-wise embedding to initially explore concept-selective in the brain cortex, showing its value in neuroscience applications." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "#### 1. More related works should be discussed:\nThe authors should discuss additional related works or acknowledge that these related works have inspired them, including but not limited to:\n+ At the level of model design, the proposed method seems to be a revised version of [1], which employs ROI-wise embedding rather than voxel-wise embedding. \n+ At the level of research ideas, some works also train encoding models and then use them for neuroscience explorations, such as [2][3].\n+ at the level of fMRI representation learning, [4][5] already show the use of multi-subject data can enhance each subject's representation, and [4][6][7] already show the use of other subject's fMRI can achieve few-shot transfer learning.\n\n#### 2. Limited evaluation metrics:\n\nIn this paper, only voxel-wise Pearson coefficients and retrieval results are used as evaluation metrics, and the inclusion of more metrics such as $R^2$, MSE, etc. can further indicate the fMRI encoding accuracy.\n\n\n#### 3. On fMRI replicable\n\nThe method proposed by the authors fails to address the issue of fMRI replicability, which is a common problem with regression-based fMRI encoding models. The authors already discuss this in their limitation and assume that the fMRI captured by subjects viewing the same image multiple times is the same. However, this assumption may greatly limit the training of fMRI encoding models.\n\n\n\n[1] Hossein Adeli et al. Predicting brain activity using Transformers. bioRxiv, 2023: 2023.08. 02.551743. \n\n[2] Andrew Luo et al. Brain Diffusion for Visual Exploration: Cortical Discovery using Large Scale Generative Models. NeurIPS 2023.\n\n[3] Andrew Luo et al. BrainSCUBA: Fine-Grained Natural Language Captions of Visual Cortex Selectivity. ICLR 2024.\n\n[4] Shizun Wang et al. A cross-subject brain decoding framework. CVPR 2024.\n\n[5] Guangyin Bao et al. Wills Aligner: A Robust Multi-Subject Brain Representation Learner. arXiv:2404.13282.\n\n[6] Paul S. Scotti et al. 
MindEye2: Shared-Subject Models Enable fMRI-To-Image With 1 Hour of Data. ICML 2024.\n\n[7] Zixuan Gong et al. MindTuner: Cross-Subject Visual Decoding with Visual Fingerprint and Semantic Correction. arXiv:2404.12630." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "A Universal Brain-Encoder, which combines data from multiple brains (from many different subjects/datasets/machines, without any shared data)." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024the,\ntitle={The Wisdom of a Crowd of Brains: A Universal Brain Encoder},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xHGL9XqR8Y},\nnote={under review}\n}" }, "abstract": { "value": "Image-to-fMRI encoding is important for both neuroscience research and practical applications. However, such “Brain-Encoders” have been typically trained per-subject and per fMRI-dataset, thus restricted to very limited training data. In this paper we propose a Universal Brain-Encoder, which can be trained jointly on data from many different subjects/datasets/machines. What makes this possible is our new voxel-centric Encoder architecture, which learns a unique “voxel-embedding” per brain-voxel. Our Encoder trains to predict the response of each brain-voxel on every image, by directly computing the cross-attention between the brain-voxel embedding and multi-level deep image features. This voxel-centric architecture allows the functional role of each brain-voxel to naturally emerge from the voxel-image cross-attention. We show the power of this approach to: (i) combine data from multiple different subjects (a “Crowd of Brains”) to improve each individual brain-encoding, (ii) quick & effective Transfer-Learning across subjects, datasets, and machines (e.g., 3-Tesla, 7-Tesla), with few training examples, and (iii) we show the potential power of the learned voxel-embeddings to explore brain functionality (e.g., what is encoded where in the brain)." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Image-to-fMRI encoding", "Explore the brain using ML", "Brain-Image cross-attention", "Visual Perception", "Brain Mapping", "Neuroscience", "Computer Vision" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/4eb1f533354db06644472375d0eb868de5dbf8fc.pdf" }, "presentation": null, "primary_area": { "value": "applications to neuroscience & cognitive science" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "The Wisdom of a Crowd of Brains: A Universal Brain Encoder" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xHMMt7r3GW
LieRE: Generalizing Rotary Position Encodings to Higher Dimensional Inputs
main
Active
Position Encoding;Attention;Transformer;Computer Vision;Machine Learning
unsupervised, self-supervised, semi-supervised, and supervised representation learning
3;5;8
4;3;4
2;3;3
2;3;4
2;1;3
5.333333
3.666667
2.666667
3
2
0.114708
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "* When the authors say ‘generator space’ do they mean Lie algebra?\n* In the definition of R_{LieRE} consider using A(x) as the argument to the exponential map\n* (Line 400) Broken figure reference \n* Do the authors have thoughts on why LieRE improves training/data efficiency \n* Should the final statement in eq 2 be exp(V)^-1 exp(U)?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The authors propose a novel position encoding scheme with improved performance over baseline models. The improved performance is in terms of predictive performance, training efficiency, and data efficiency. Moreover the model can be used for 2D and 3D data." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a position encoding technique that can be used with attention mechanisms for 2D and 3D data. In contrast to RoPE which learns a block diagonal (2x2 blocks) rotation matrix transformation of key and query matrices from relative position information, LieRE learns a general rotation matrix transformation of key and query matrices from absolute position information. While this introduces additional parameters, the authors mitigate this by sharing parameters across attention heads. The authors show that with the combination of these strategies, LieRE improves predictive performance, training efficiency and data efficiency." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The writing could be improved. The document would benefit from additional proofreading. For example, in several places (Lines 169, 173) the text reads ‘equation equation’; ‘figure’ and ‘table’ should be capitalized (Line 187, 397); and the notation for updated keys and queries is inconsistent (Line 188, 189, 209). The clarity of the document would be improved. For example, it would benefit the reader if the equation for the attention mechanism were provided 3.1; some text describing the algorithm would benefit the reader; it would be nice to show that LieRE-Commute and RoPE-mixed are special cases of LieRE.\n\nThe organization could be improved. Using subsubsections in the related work doesn’t seem necessary, and takes up space that might be used to clarify the method section. Some details of the method are presented for the first time in the Results section (e.g., that the parameters of LieRE and its variants are shared across attention heads). Figures are often far from where they are referenced." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. I don't understand why commutativeness in the RHS of Equation (2) is not $\\exp(V)^{-1}\\exp(U)$, but $\\exp(U)^{-1}\\exp(V)$. Is it a typo?\n2. Should line 201 \"We present attention with LieRE and LieRE-Commute in Algorithm 1 and 2\" reverse the order? \n3. Would it be better to clarify the relation between \"LieRE-Commute, learnable RoPE and RoPE-mixed\" already here in the method section (now this does not appear until 5.3.3), and remind the reader in the background why the group $SO(d)$ not abelian if $d>2$, preferably with some one-line intuitions?\n4. Do you have more intuition to justify the reason why the stem option outperforms layer or head heterogeneous implementations? For example, does it suffice to think of the $O(n)$ symmetry in the dot product that only needs one representative element in the quotient group, instead of striving to learn it repeatedly in every layer or head subspace?\nBased on this, would you think it is possible to unify the representation by rotationally aligning the subspaces of each layer?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The positional encoding structure is fundamentally important in Transformer architectures. Improving over the state-of-the-art has a great impact. \n- I find it particularly interesting to treat the spatial and embedding dimensions of the hidden states equally, which I believe results in a unified understanding of the embedding space as a tensor field. Lie group generalizes rotary positional embedding from dimension $2$ to $n$ by regarding RoPE as a commutative special case.\n- I find it interesting that sharing parameters across layers (LieRE-Commute over RoPE-mixed) improve learnable positional encodings." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a symmetry-aware method for positional encoding (LieRE) that outperforms the state-of-the-art methods (RoPE), by considering subspace rotations of dimension higher than $2$ in the self-attention's dot-product (the orthogonal symmetry group action given by $(R,(q,k))\\mapsto (Rq,Rk)$). Using a Lie group theory the resulting rotation encoding is deduced as $R=\\exp(\\sum_iA_ix_i)$ where $i$ is each domain dimension and $A_i$ is learnable skew-symmetric matrices. Furthermore, shuffling patches causes more accuracy drop in LieRE than RoPE, which verifies that the model relies more on positional encodings by using LieRE. It also outperforms existing methods in terms of training time and dependency on dataset size." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
Although the relative gain over existing methods is fair and remarkable, <70% accuracy on CIFAR100 and ImageNet and ~50% on UCF101 is far from optimal. For example, the referred paper (Heo et al. 2024) reports >80% accuracy. It would be more convincing to improve the baseline.\n2. The 3.5x reduction in training time is compared under the wall time of 200 epochs, which means the same performance is obtained at around 57 epochs for LieRE. I wonder how these methods compare in terms of the best test loss, and the converged training loss (i.e., after 200 epochs). Running longer experiments may also help remedy poor baselines.\n3. I find the compute efficiency less informative than the learning curve. The FLOPs analysis is of practical interest but looks trivial since positional encoding is a lightweight part of the model.\n\nMinor issues: \n- Table 2: I find the word \"stem\" in Table 2 confusing and unnecessary. Clarifying it in the text rather than just in Figure 3a would help.\n- Many \\citet should be \\citep\n- Table 2 line 381: Rope should be {RoPE}. \n- Line 400: Figure ??" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Please define the A matrix in detail. Clarify the notation n and d in Section 3.2 (use them consistently please).\n- Can you conduct experiments on a data set with varying image sizes and resolutions? In other words, a single dataset with different scales of spatial correlation." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The use of Lie groups for position-aware attention is a novel and interesting idea." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes an extension to the prior work on RoPE attention. The central idea is to extend the 1-d position encoding in RoPE to handle higher-dimensional position indices. In RoPE, the feature vectors are rotated before the attention inner products to take into account the distance between two tokens. For each d-dimensional feature vector, d/2 rotations in 2D subspaces are used. The authors argue that using one rotation in d-dimensional space, taking into account multiple token indices, is better.\n\nLie theory is used to map position indices to \"rotations\", via the exponential map." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The method is not clearly defined with a proper mathematical description. The code segment in the appendix is also not explained, particularly not in a way that connects with the main text.\n- It is not intuitive why a single rotation is better than multiple rotations. Actually, multiple rotations of RoPE have the benefit of adaptively capturing different levels of distance effects."
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "LieRE encodes positions of tokens for n-dimensional data by replacing the RoPE rotation matrix with a dense, high-dimensional rotation matrix generated via a learned map." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024liere,\ntitle={Lie{RE}: Generalizing Rotary Position Encodings to Higher Dimensional Inputs},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xHMMt7r3GW},\nnote={under review}\n}" }, "abstract": { "value": "Rotary Position Embeddings (RoPE) have demonstrated efficacy and gained widespread adoption in natural language processing. However, their application to other modalities has been less prevalent. This study introduces Lie group Relative position Encodings (LieRE), which extend beyond RoPE by accommodating n-dimensional inputs. LieRE encodes positions of tokens by replacing the RoPE rotation matrix with a dense, high-dimensional rotation matrix generated via a learned map. We conducted empirical evaluations of LieRE on 2D and 3D image classification tasks, comparing its performance against established baselines including DeiT III, RoPE-Mixed, and Vision-Llama.\nOur findings reveal significant advancements across multiple metrics:\nPerformance: LieRE achieved up to a 6% improvement in classification accuracy.\nTraining Efficiency: A 3.5-fold reduction in training time was observed.\nData Efficiency: LieRE required 30% less training data to achieve comparable results.\nThese substantial improvements suggest that LieRE represents a meaningful advancement in positional encoding techniques for multi-dimensional data. The implementation details and reproducibility materials are openly available." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Position Encoding", "Attention", "Transformer", "Computer Vision", "Machine Learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/5a4e80b7b5fa8cfd9cfac63a63830511920e3801.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/1004b821ba7673f7041a595e00f8a3e2fa9e1c65.zip" }, "title": { "value": "LieRE: Generalizing Rotary Position Encodings to Higher Dimensional Inputs" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xHPVGmLXjd
QJL: 1-Bit Quantized JL Transform for KV Cache Quantization with Zero Overhead
main
Active
KV-Cache;Quantization;JL Transform;Fast AutoRegressive Models
optimization
3;3;3;5
5;4;5;4
2;1;2;2
2;2;1;2
3;2;2;3
3.5
4.5
1.75
1.75
2.5
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1.It is recommended to measure experimental results across different datasets and compression ratios.\n\n2.Please enhance the discussion regarding the random matrix, considering the space occupied by the random matrix or providing proof that not storing the random matrix yields comparable results.\n\n3.Use different random seeds to compute the mean and variance of the results to demonstrate robustness." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1.The approach proposed in this paper is innovative, utilizing random projections and norm storage during the compression process to ensure approximate preservation of inner product results. This provides a novel perspective for KV cache compression, offering a new direction in model optimization.\n\n2.The mathematical proofs are rigorous and reliable, providing a solid estimation of the error bounds. This careful analysis enhances the credibility of the method and supports its practical application.\n\n3.The paper clearly articulates the method's rationale and experimental setup, making it easy for readers to understand the approach and reproduce the experiments. The clarity in both the methodology and results presentation makes this work accessible and valuable to the research community." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel approach to compressing key-value (KV) caches in large language models by utilizing a Quantized Johnson-Lindenstrauss (QJL) transform. The proposed method combines random projections and norm storage to approximate inner products, allowing for substantial memory reduction without significantly impacting model accuracy. By only storing the sign of the projected vectors and their norms, the method maintains the fidelity of inner product calculations essential for attention mechanisms. The paper provides rigorous mathematical analysis to validate the approximation accuracy and includes experimental results demonstrating its effectiveness in reducing memory usage while preserving performance in practical applications." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.The experimental results are inadequate and do not demonstrate that the method proposed in this paper is superior to existing methods.\n\n2.The experiments are insufficient, failing to adequately showcase the performance of the proposed method compared to existing methods across different compression ratios and datasets.\n\n3.The assumption of not storing the random matrix is unreasonable, as the proof relies on the assumption that the random matrices before and after are the same; the justification for not storing the random matrix is unconvincing.\n\n4.There is no examination of the method's robustness under different random seeds." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. QJL is only applied to keys and not values. Can QJL be applied to values as well? What are the impact of QJL for quantizing values on model accuracy?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. The problem studied is an important one.\n2. The paper provides theoretical justifications for the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes QJL, a method of KV cache quantization for improving the memory-efficiency and throughput of LLM inference. The authors identify a problem in existing methods: there is significant memory overhead for storing quantization constatns. The authors propose QJL to eliminate this memory overhead, by leveraging Johnson-Lidenstrauss transform and sign-bit quantization. Empirical evaluations demonstrate competitive accuracy and inference efficiency against existing methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Some claims in the paper may be overclaims. On line 14, the authors state that \"traditional quantization methods face significant memory overhead due to the need to store quantization constants.\" On line 116, the authors claim that \"QJL sketch can quantize vectors with zero overhead because it does not require grouping the data and storing quantization constants (zeros and scales) per group.\" However, line 175 makes clear that QJL uses \"1-bit JL transform\". Hence QJL has the non-zero overhead of 1 bit, which is the same overhead as KIVI for storing the quantization constants. Furthermore, QJL is only applied to keys and not values, so the overhead of value quantization is the same as existing methods.\n2. The improvement over the baselines are marginal. From Table 1 and 2, the improvements over the baselines KIVI and KVQuant are mostly marginal. 
Furthermore, on certain datasets, QJL is considerably worse than the baselines by a few points (Table 1).\n3. The \"Conclusion\" and \"Related Works\" sections are missing from the paper. The paper ends abruptly after experiments, with no conclusion or related works. This makes the paper incomplete.\n4. The advantages and benefits of the method over existing methods are not clear. QJL has the same 1-bit quantization overhead as KIVI, so it has the same memory efficiency. QJL performs worse (in average accuracy) than existing methods in all 3 comparisons in Table 1. Moreover, QJL does not offer better inference efficiency against KIVI. Can the authors please clarify the advantages of QJL over existing methods?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The title claims that the algorithm is 1-bit quantized; however, in the main text, it states that the KV cache is quantized to 3 bits. Could the authors clarify which representation is correct or explain if these refer to different bit representations?\n\n2. The statement that \"zero point and scale factor\" add 1-2 bits per quantized value appears to be inaccurate. For KIVI, assuming a group size of 64, the memory overhead from scale and zero point is around 3%, which is approximately equivalent to a half-bit increase. Could the authors address this discrepancy?\n\n3. The term \"bit per floating point\" in the paper is unclear. Does it refer to the bit width of the compressed KV cache or a metric for comparing the bit efficiency of different compression methods?\n\n4. Following up on question 3, my understanding is that current NVIDIA GPUs do not natively support 3-bit quantization or an int3 data format. How is 3-bit quantization implemented in your approach, and does it introduce any additional latency?\n\n5. In Section 4.3, the authors claim that KIVI does not support LLaMA3-8B. However, it seems possible to transfer the weights and activations of LLaMA-3-8B to half-precision (fp16) or use simulated compression for KIVI, which should yield similar accuracy. Could the authors comment on this?\n\n6. Could the authors provide results on reasoning datasets like GSM8K or Math using advanced models such as Phi3 or Qwen2, comparing KIVI, QJL, and the fp16 baseline?\n\n7. The system evaluation is incomplete. Could the authors provide throughput results for QJL on a single GPU, showing batch size versus maximum throughput? Since one of the key benefits of KV cache compression is improving the maximum throughput of LLM inference, this data would be crucial.\n\n8. Related works, such as Quarot, use Hadamard transforms to mitigate outliers. How does the JL transform in your approach compare in terms of both accuracy and computational efficiency?"
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Well-theoretical analysis of JL transform and clear algorithm description.\n\n2. Ablation study of long context dataset and system evaluation results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new KV cache compress method that first performs the johnson-Lindenstrauss (JL) transform to reduce outliers in KV cache. Then quantize the KV cache for LLM inference efficiency." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "### 1. Lack of Reasoning Datasets Evaluation\nThis comment highlights a legitimate concern but does not specify how the absence impacts the overall validity or generalization of the work. Additionally, it lacks any guidance or reference on the types of baselines expected.\n\n### 2. Ambiguous Figures and illustration.\nThe comment points out an issue but could be clearer about the nature of the ambiguity. Mentioning only one figure limits the scope and leaves the authors guessing which visual improvements are needed. Also, the bit shown in the table is not well illustrated. KIVI actually supports 2-bit and 4-bit versions while in the paper it claims that KIVI only supports 3 and 5-bit versions.\n\n### 3. Lack of Maximum Throughput Evaluation\nThe review raises an important point about throughput evaluation but does not discuss how this impacts the paper’s conclusions or how to address the gap meaningfully.\n\n### 4. Ambiguous Illustration of Algorithm Workflow\nThe comment points out ambiguity but is vague about the specific aspects of the workflow that are unclear.(See details at questions)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "I believe this paper does not require an ethics review." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Q1: Regarding W1, please consider finding a method to automate end-to-end quantization of the k cache, eliminating the need for case-by-case manual adjustment.\n\nQ2: Regarding W2, please add experimental results for higher bit levels in Tables 1 and 2, such as 8-bit for Table 1 and 4-bit, 5-bit, and 8-bit for Table 2.\n\nQ3: Regarding W3, please explain how the current experimental results demonstrate the advantages of the QJL algorithm.\n\nQ4: Regarding W4, please analyze the time complexity of the QJL algorithm in the theoretical section." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "S1: This paper clearly describes the existing problem of KV cache compression.\n\nS2: The paper effectively combines the KV cache compression problem with the Johnson-Lindenstrauss (JL) transform, leveraging the mathematical principles of the JL transform to compress the K cache.\n\nS3: The paper theoretically derives the unbiased estimation and bounded distortion of the QJL algorithm." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces QJL, a novel quantization method aimed at compressing the Key-Value (KV) cache memory in large language models (LLMs). QJL combines a Johnson-Lindenstrauss (JL) transform with sign-bit quantization, eliminating the need for additional quantization constants and thus reducing memory overhead. The authors also propose an asymmetric inner product estimator: by applying QJL to one vector and a standard JL transform (without quantization) to the other, they achieve unbiased, low-distortion inner product estimates. Experimental results demonstrate that QJL achieves comparable accuracy at various quantization bit levels while offering faster computational speed." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1: As shown in Theorem 3.6, the effectiveness of the QJL algorithm is proportional to the norms of the embeddings, necessitating a preprocessing step for the k cache in practical applications. In the paper, the authors illustrate this with a key cache plot in Figure 2, which leads to case-by-case handling and reduces usability.\n\nW2: The authors only experimented with select quantization bit levels (3, 4.3, and 5 in Table 1; 3 in Table 2), leaving the experiments somewhat insufficient.\n\nW3: The existing results in Tables 1 and 2 show no clear superiority over comparative algorithms, only comparable performance (slightly better on some tasks, slightly worse on others).\n\nW4: The theoretical analysis could be enhanced with a discussion on time complexity." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024qjl,\ntitle={{QJL}: 1-Bit Quantized {JL} Transform for {KV} Cache Quantization with Zero Overhead},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xHPVGmLXjd},\nnote={under review}\n}" }, "abstract": { "value": "Serving LLMs requires substantial memory due to the storage requirements of Key-Value (KV) embeddings in the KV cache, which grows with sequence length. An effective approach to compress KV cache is quantization.However, traditional quantization methods face significant memory overhead due to the need to store quantization constants (at least a zero point and a scale) in full precision per data block. Depending on the block size, this overhead can add 1 or 2 bits per quantized number. We introduce QJL, a new quantization approach that consists of a Johnson-Lindenstrauss (JL) transform followed by sign-bit quantization. In contrast to existing methods, QJL eliminates memory overheads by removing the need for storing quantization constants. 
We propose an asymmetric estimator for the inner product of two vectors and demonstrate that applying QJL to one vector and a standard JL transform without quantization to the other provides an unbiased estimator with minimal distortion. We have developed an efficient implementation of the QJL sketch and its corresponding inner product estimator, incorporating a lightweight CUDA kernel for optimized computation. When applied across various LLMs and NLP tasks to quantize the KV cache to only 3 bits, QJL demonstrates a more than fivefold reduction in KV cache memory usage without compromising accuracy, all while achieving faster runtime." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "KV-Cache", "Quantization", "JL Transform", "Fast AutoRegressive Models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/cdc1e01340268d77492b4572ac32cee9ab022030.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/dc64dcfb3830d84d7b024fba926a99e964fdfb26.zip" }, "title": { "value": "QJL: 1-Bit Quantized JL Transform for KV Cache Quantization with Zero Overhead" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
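The asymmetric estimator described in the QJL abstract above is simple enough to state in code. Below is a minimal NumPy sketch of the idea, not the authors' optimized CUDA implementation: project the key with a Gaussian JL matrix, store only the sign bits plus the key's norm, and correlate those bits with the un-quantized projection of the query. The dimensions, seed, and function names are our own illustrative choices.

```python
import numpy as np

def qjl_quantize(key, S):
    """1-bit QJL sketch of a key: sign bits of its JL projection.

    Only the sign bits and the key's norm are stored, with no per-block
    zero point or scale, which is the "zero overhead" claim."""
    return np.sign(S @ key), np.linalg.norm(key)

def qjl_inner_product(query, sign_bits, key_norm, S):
    """Asymmetric estimator: un-quantized JL projection of the query,
    correlated with the stored sign bits of the key."""
    m = S.shape[0]
    return np.sqrt(np.pi / 2) * key_norm * np.dot(sign_bits, S @ query) / m

rng = np.random.default_rng(0)
d, m = 128, 4096                   # head dim and sketch dim (illustrative)
S = rng.standard_normal((m, d))    # JL matrix with i.i.d. N(0, 1) entries
k, q = rng.standard_normal(d), rng.standard_normal(d)

bits, norm = qjl_quantize(k, S)
print(qjl_inner_product(q, bits, norm, S))  # approximates np.dot(k, q)
print(np.dot(k, q))
```

For Gaussian projections, E[sign(s·k)(s·q)] = sqrt(2/pi) * (k·q) / ||k||, so the sqrt(pi/2) * ||k|| factor makes each row's estimate unbiased, which is the abstract's unbiasedness claim in miniature.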
xI71dsS3o4
(Mis)Fitting Scaling Laws: A Survey of Scaling Law Fitting Techniques in Deep Learning
main
Active
survey;scaling laws;large language models;foundation models
foundation or frontier models, including LLMs
3;5;5;8
5;3;2;5
3;3;2;4
2;2;3;4
3;2;2;4
5.25
3.75
3
2.75
2.75
0.134742
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see the weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper has several strengths:\n- This paper considers a timely topic, the scaling law of language models. Understanding this topic will help to effectively train LLMs, avoiding resource overuse. \n- Authors discussed the discrepancies in experiment settings of different papers and empirically verified it. Results are aligned with previous works.\n- Authors open-sourced their code to reproduce results which benefits the community since the source code is usually absent from previous papers." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper surveyed/ more than 50 papers about the scaling law of language models. Authors discussed different aspects of scaling law including fitting forms, model training, data extraction, and fitting optimization. Based on that, authors provided a checklist, which helps to transparent settings for reproducible results in future research. Experiments also were conducted to verify their replication and analyses." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Despite these strengths, this paper has several weaknesses:\n- The scale of the model in experiments is not big enough. Authors consider only models with less than 400M params, ignoring the existence of larger models with billions of params. \n- The writing in some parts of the main paper causes confusion. E.g., Section 5 is about data extraction after training, I was confused by which kind of data could be extracted." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I have already listed direct questions in the section above, and I would be open to discuss this in the rebuttal. I hope the authors see the comments to be constructive, and can clarify or improve the distinctive value of the paper." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper gives a good overview of many papers on scaling laws, and nicely categorizes the important steps: functional form, training setup, data(points) extraction, curve fitting. The checklist provides a clear way of reproducibility and quality assessment of scaling experiments. I think the topic of scaling law studies is important and relevant, and the writing is clear." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors survey a large corpus of papers that involve scaling laws, and find that many papers underreport necessary details for reproducibility, which they demonstrate with experiments that demonstrate a large variability depending on those exact choices of details. They propose a checklist for authors to consider when publishing scaling laws." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main concern for me is the following: what is the main goal the authors are trying to convey? To me, there are two obvious takeaways, which is 1) changes to the scaling law setup can change the results drastically, and 2) previous papers very much underreport crucial details. However, both of these things are rather clear already to the community and also illustrated by published papers: for example, point 1) is shown by Porian et al., and point 2) is a broader critique of reproducibility problems, which (unfortunately) is a generic problem. I do not see a clear and actionable interpretation beyond that. For instance, how do the different choices of fitting actually affect the scaling laws? (The assessment is mostly just “the results vary dramatically” — but how?) What should I as a researcher now do for my future scaling studies, having read your paper, beyond using the checklist? Are there clearly ‘wrong’ or ‘right’ choices? Was there a most predictive scaling law (e.g. when you leave out some experiments as a validation set)?\n\nTo be clear, I very much believe there is merit in a survey or pointing out these problems; as it stands, however, the paper is foremost “just” a survey, and I am not convinced this merits publishing at the conference.\n\nSome additional comments: \n * The paper template says ICLR 2024\n * The Figures are unfortunately of low quality (very pixelated), especially considering the fact that it’s natural to zoom in to compare the many lines and details. I suggest the authors include the pdf forms for proper rendering." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "-" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "I really like this paper and think it is of great value for the community to \"recap\" scaling law results and provide a critical discussion, complemented by experiments showing which factors matter when choosing a scaling law. It was really a pleasant read, quality of writing is good and the motivation clear." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work revisits scaling laws and factors influencing the results reported in recent papers. The paper takes a \"review approach\", outlining some of the debates in recent literature as well as common strategies. To draw conclusions and emphasize their points, the last pages are dedicated to an in-depth analysis by the authors on small to moderate size transformers." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "One could maybe claim the paper is not too constructive, as it shows that choices (optimizer, fitting method, lr annealing, data) matter when fitting a scaling law: there is no correct answer. However, this conclusion demystifies the topic, which I like very much: there is no magic, just common choices and \"usual\" results. This said, there are a couple of very minor points.\n\n1) Proposing a checklist is helpful, but, as the authors themselves seem to hint, the number of factors to account for is potentially infinite. What about Adam beta2? What about weight decay? What about hybrid algorithms? What about qk norm and new tricks? The reality this paper points out is that, indeed, such choices matter, and I do not think any checklist can be conclusive.\n\n2) section 7.1: why did you decide to set alpha=beta?\n\n3) The paper lacks a bit of conclusions: what should researchers do? should we trust scaling laws? what are the things that hold true despite changing the setting? Is there some practical rule for scaling that holds approximately in your experiments? (would have been interesting alpha and beta)\n\ntypo spot: \"was was\" in the abstract, repetition." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- could you provide explicit recommendation regarding how to perform the curve fitting? I think this is different from a checklist which allows reproduction. \n- Could you expand section 7?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Scaling laws is an important topic, and scientific rigor here can benefit the research community.\n- The authors illustrate how subtle choices in the curve fitting can cause significant results\n- Section 7 is great." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a survey on the fitting of scaling laws, and argue that current practices are lacking in scientific rigor. Apart from an extensive survey, the authors presents a reproducibility checklist, and compare 51 papers to this checklist. They generally find that important details are underreported, e.g. the method to calculate model parameters might not be given. They also provide a replication study of Hoffman, using data extracted from the paper PDF and data they’ve collected themselves. Here they find that subtle choices in the curve fitting can result in significantly different conclusions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Significant parts of the papers are dedicated to a survey. I’m not sure survey papers are the right fit for ICLR main track.\n- There are not so many empirical results." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We survey over 50 papers on scaling laws and discuss how systematic underreporting of details can change the conclusions of a paper" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024misfitting,\ntitle={(Mis)Fitting Scaling Laws: A Survey of Scaling Law Fitting Techniques in Deep Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xI71dsS3o4},\nnote={under review}\n}" }, "abstract": { "value": "Modern foundation models rely heavily on using scaling laws to guide crucial training decisions. Researchers often extrapolate the optimal architecture and hyper parameters settings from smaller training runs by describing the relationship between, loss, or task performance, and scale. All components of this process vary, from the specific equation being fit, to the training setup, to the optimization method. Each of these factors may affect the fitted law, and therefore, the conclusions of a given study. We discuss discrepancies in the conclusions that several prior works reach, on questions such as the optimal token to parameter ratio. We augment this discussion with our own analysis of the critical impact that changes in specific details may effect in a scaling study, and the resulting altered conclusions. Additionally, we survey over 50 papers that study scaling trends: while 45 of these papers quantify these trends using a power law, most under-report crucial details needed to reproduce their findings. To mitigate this, we we propose a checklist for authors to consider while contributing to scaling law research." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "survey", "scaling laws", "large language models", "foundation models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/77bb0af5340cb4185de00ce597348f053719e44f.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "(Mis)Fitting Scaling Laws: A Survey of Scaling Law Fitting Techniques in Deep Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xIUUnzrUtD
Building, Reusing, and Generalizing Abstract Representations from Concrete Sequences
main
Active
Abstraction;Chunking;Cognitive Science;LLMs
transfer learning, meta learning, and lifelong learning
5;6;6;8
3;4;4;2
2;4;4;3
2;4;4;3
3;2;4;3
6.25
3.25
3.25
3.25
3
-0.622543
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "**Questions and improvements:**\n\n1. The work and claims could be strengthened by evaluating on more datasets that focus on abstraction, but have not been generated by the authors. This is only relevant for a major revision.\n2. Topic models and various forms of hierarchical latent variable models have been used and discussed extensively in linguistics, machine learning, and cognitive science. How does the HVM relate to commonly used topic models (LDA, and more modern ones)? Ideally this is discussed on a technical level in detail, but at the least it needs to be included with more detail in the related work discussion.\n3. How does the generative model relate to the Hierarchical Dirichlet Process (Teh et al. 2006)? \n4. Why was LZ78 chosen as a baseline? It is a lossless general purpose compressor, and has not been designed specifically for natural language data or data with hierarchical structure. Personally I think a much more interesting comparison would be against Context-Tree Weighting (Willems et al. 1995), or maybe the forget-me-not process (Milan et al. 2016), though the latter is perhaps a bit of an overkill (and quite involved to implement).\n5. Table 1: “we took random snippets of 1000 characters and calculated evaluation metrics”, that seems like fairly short sequences. Why 1000? Is there a scalability issue with longer sequences? How sensitive are the results, particularly the comparison against LZ78 / CTW to the sequence length?\n6. The paper mentions lossy compression multiple times, but as far as I understand all evaluation metrics are lossless in the end (more “lossy” models simply require longer coding lengths / have less coding efficiency)? I am struggling to follow section 4.5 (despite having spent half of my PhD on rate distortion theory). For sure LZ78 is not a lossy compressor - it is lossless. Typically, the distinctive feature of lossy compression is that not all prediction/reconstruction errors matter equally, i.e. the distortion function is typically not the log loss (lossy compression requires a quantitative definition of which information is relevant and to which degree, relative to some task/goal; this is what the distortion function does; the log loss treats all information equally). What is the distortion function in the paper? If one is willing to go lossy, there is a famous trade-off between fidelity and “complexity” (really, the information rate): the rate-distortion curve. I have a hard time relating Fig. 6b to a rate-distortion curve - the “Representation Complexity” seems to be more related to the rate than the distortion, but the figure legend says exactly the opposite. And how is “Abstraction Iterations” (Fig. 6c) related to the abstraction level and representation complexity and thus ultimately the distortion (which also applies to L467-477)? 
I do agree that lossy compression can be used to formalize a particular kind of abstraction, but it seems to me that what is happening in the paper is more similar to a minimum-description-length argument for *lossless compression* (the more complex the model, i.e. the deeper the tree of abstractions, the better it can compress a sequence, but the price to pay for it is by having a more complex model). The mistake may be fully on my side, but please clarify.\n7. Some of the discussion / conclusion is a bit strong. I would not reject the paper based on this, but I have listed concrete issues in the minor comments.\n8. Why was the generative model introduced? Were there no suitable generators or datasets in the literature (that are more widely used)? Which shortcomings of previously used data (generators) does the current paper tackle? (I am leaning towards listing this as a minor point, but I also think that any paper that introduces a new data set or data generation procedure should justify it over using what’s been published and used previously by others).\n\n**Minor comments:**\n1. L 164 - line break within inline equation.\n2. Discussion in L279-284 leaves out that HCM achieves better coding efficiency than HVM if I understand correctly.\n3. L 315: “LZ78 performs well in terms of compression efficiency, which is expected given its design purpose”. I don’t fully agree, LZ78 is a general purpose lossless compressor, it has not been specifically developed to compress natural language.\n4. L 499: “our work provides a reductionist cognitive model that explicitly specifies the minimal components needed for a model to learn interpretable abstract structure from sequences.” - what makes the model particularly “cognitive”? I also mildly disagree that the model “specifies the minimal components”, rather, it is one solution with few components, but it is unclear that this is the minimum needed (and also minimal in which sense?).\n5. L 503: “Our generative mechanism offers a probabilistic, hierarchical sequence generation model relying on chunk-based recursive generation and inventory growth rather than formal grammar rules.” - is this an advantage; does this address some shortcoming in the literature?\n6. L 520: “Previously, grammar learning, chunk learning, and statistical/associative learning were studied in isolation as distinct aspects of sequence learning.” - it should be pointed out that this sentence refers to the cognitive science(?) literature (in other fields, like algorithmic information theory, which deals primarily with sequential learning, this distinction does not play a big role).\n7. L 523: “Our work suggests a normative origin of concrete and abstract chunk learning” - I think the normative claim is a bit overstated in light of the results and no discussion that rules out all other possibilities." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* The model is an interesting, sensible, and original improvement over HCM.\n* Empirical results show that the model learns well and works well on synthetic data and some natural language datasets.\n* The work is very well put into wider perspective in the introduction." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper tackles the problem of discovering useful abstract representations in sequence prediction, that allow for better compression and transfer (generalization) with the aim of “[capturing] the learning and transfer of abstract representation in human cognition”. To this end, the paper extends a previously proposed model for probabilistic hierarchical chunking, HCM, with the ability to include learnable abstract variables into the chunk dictionary. Learning the model and performing inference with it is somewhat involved (as is generally the case for non-parametric probabilistic models), but explained well in the paper. On synthetic data that is designed to embody the assumptions underlying the model, the model performs very well and outperforms HCM and Lempel-Ziv (LZ78) as a baseline. The model also performs favorably on four natural language datasets (cut into sequences of 1000 characters), and correlates better with human recall times in a color memorization task. The paper finds that in-context learning with LLMs (GPT2 and Llama-2) behaves qualitatively differently on a similar task. Finally, the paper draws a connection between learning abstractions and lossy compression." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* Many of the main experiments were conducted on data from a generative model that fits exactly the modeling assumptions for HVM (or tasks inspired by these assumptions). As is often the case when a paper proposes a novel model and a novel data set / generator, the fact that the model works well on data that was specifically designed for the model to work well is not a very strong argument. Luckily the paper also shows good results on “natural” data, and qualitatively matches some aspects of human behavior (recall times) that are not trivially predicted from the synthetic data.\n* Hierarchical Dirichlet processes and related models (perhaps most famously many variants of topic models) have been used and discussed exhaustively in the ML, linguistics, and cognitive science literature. But an in-depth discussion and technical comparison of HVM against these is currently missing.\n* From a cognitive perspective, a severe limitation is that learning in HVM strongly depends on initial learned representations and the order in which learning experiences are represented. While some human learning mechanisms have these traits, I am not sure whether they apply to the learning of abstract concepts in natural language to the extent that the model would predict.\n\n\n**Verdict:**\nThe model as an improvement over HCM is sensible, sound, and results clearly show the benefits. Additionally, the cognitive plausibility (at least on an abstract computational level) of the model is decent (at least to me as a non-linguist), and supported by the experiment with human participants. The paper is well written and presents a nice set of experiments. On the other hand, I think many (but not all!) of the experiments must be taken with a grain of salt, since the data has been either synthetically generated to match the model, or has been designed with the same qualities in mind (like in the color memorization task). 
The comparison against in-context learning in previous-generation LLMs on the same task (translated into text) is ok, but I am not sure whether there is a big take-away other than saying that LLMs' in-context learning on this task differs from the model and from human learning on this task. It is unclear whether LLMs should even be designed to mimic HVM in context (the discussion seems to mildly hint at this by claiming normativity of the model). The work is very interesting to a comp. neurosci. audience and a comp. linguistics audience, but its impact in the ML and AI community is likely to be quite limited (nonetheless, a part of the ICLR audience has a background in the aforementioned fields). Some technical discussion around hierarchical Dirichlet processes (or related models) and topic models is missing. Finally, some of the writing and some claims are perhaps a bit overstated (see concrete points under ‘Questions’). In its current state, for an ML conference, I think the relevance and significance of the current work is fairly limited. I think the paper would benefit from a major revision and could be significantly strengthened to be more impactful. I am therefore currently leaning towards rejection (at a top-tier ML conference)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Please answer the two points under weaknesses above." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "Impressively, this paper presents a novel theoretical contribution: a clear theoretical framework for combining chunking and abstraction in sequence learning, with formal proofs and guarantees.\n\nThey also evaluated their model from multiple angles: computational efficiency, correlation with human behavior, comparison with LLMs — a good set of comparisons. \n\nAnd I enjoyed their connection to cognitive science: the work bridges computational and cognitive approaches, providing insights into human learning mechanisms." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a hierarchical variable learning model (HVM) that learns abstract patterns from sequences by combining chunking and variable discovery mechanisms. HVM builds on previous chunking models by adding the ability to identify abstract variables that represent groups of chunks that appear in similar contexts. The authors demonstrate that HVM achieves better compression and parsing efficiency compared to baseline models, correlates well with human sequence learning behavior, and provides insights into how abstraction enables better generalization. They evaluate HVM on both synthetic data and real-world language datasets, comparing it against traditional compression algorithms and large language models (LLMs)."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper's comparison to LLMs is relatively narrow and focuses primarily on a specific sequence recall task and limited to short sequences, and therefore seems slightly contrived situation. The paper would benefit from explorations of slightly more complex abstraction tasks to study the general applicability of their method. \n\n2. The comparisons in the paper are quite limited and don't adequately address the rich literature on sequence compression and pattern detection. A single example I have in mind of a similar cogntiively-inspired latent variable learning model is CSCGs (https://pmc.ncbi.nlm.nih.gov/articles/PMC8062558/), but there are more." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* What is the relationship between this model and other Bayesian Wake-Sleep class of algorithms, like the older Helmholtz machine or the newer generation such as DreamCoder? There are definitely similarities in having a generative and recognition model, but I didn't see a discussion on this in the paper? I think this would be quite relevant to put in the related works section." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "* I think this is a really innovative algorithm that builds on the recent HCM in a pretty novel and cool way. It's clearly motivated by the human ability to abstract. \n\n* The evaluation is quite rigorous and showing performance on real world data as well as accounting for human behavior in a relevant task is a very nice touch. \n\n* There are a lot of rigorous proofs in the appendix. The authors have clearly thought a lot about the theoretical foundations of this algorithm as well as shown good empirical proof of its use." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this work, the authors build on a previous Hierarchical Chunking Model (HCM - a probabilistic model that learns to produce hierarchical chunks from sequential non-iid data) to create Hierarchical Variable learning Model (HVM), which groups chunks into high-level variable categories that have similar interaction properties. This model aims to compress sequences in a structured manner similar to humans. To test this out, the authors used sequence data from a variety of language datasets (childes, bnc, gutenberg, open subtitles) using HCM and LZW, a classical compression algorithm as baselines. The authors then used the model to account for human behavior in a sequence memory task that requires humans to re-use specific variables (against a control where there isn't a reusable variable). They also compared popular LLMs (GPT2, LLama2) on this task. 
HVM showed the biggest difference between the control and variable groups, mirroring the main effect observed in humans." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* I think the paper can mainly be improved in clarity. For example, when getting to Figure 3, it's kind of hard to figure out which exact datasets these results are from. In general, most of the text in the work is dedicated to describing the algorithm and results. I think adding some more information on what datasets are being used would be useful. For example, line 310: \" BabyLM language dataset, which contain text snippets from a collection of data domains\" what data domains are there? It was hard for me to understand what kind of data this was. \n\n* For Figure 5, I think the authors should plot the difference between the conditions rather than the conditions themselves. Currently, it's hard to understand why HVM shines here more than the LLMs. As I understand, the individual likelihoods don't matter as much as showing a significant decrease from control -> variable conditions. You can put the individual likelihood plots in the supplement. The plot as it is, in my opinion, undersells and obscures the result. \n\n* (minor weakness) I get the feeling that this algorithm isn't quite as scalable as other powerful sequence learning technologies we have today such as LLMs. This is not to say the algorithm is not useful because of that, because of course you do get more interesting structure and interpretability out of it (and it's also a better model of how humans do sequence learning). But I think this is at least worth mentioning in the discussion. If this model does actually have scalability almost as good as or better than tools we have today such as SSMs or transformers, I think that would be a huge bonus and definitely needs mentioning. \n\n* (minor weakness) I think a potential missed opportunity for this algorithm is interpreting the discovered variables on specific datasets that we know have rich hierarchical structure. For example, can you use this algorithm on musical notes to recover leitmotifs? But there is enough work here that this can count as future work." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Would it be possible to use these learned representations for a task that requires abstraction (e.g. abstract reasoning)?\n- When humans perform abstractions, semantic content also matters. Can this be integrated into the approach proposed in the paper?"
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is organized and well written\n- The paper includes evaluations of the model from many different perspectives, including evaluation on synthetic and real world language datasets, comparison with human performance on a cognitive task, etc\n- The paper has a good level of technical rigour with the addition of definitions, theorems, and algorithms in the appendix.\n- The topic of abstraction is of great significance to the field of AI, and the paper proposes a novel approach to tackling this issue" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors introduce a hierarchical variable learning model (HVM) that learns abstractions on sequence data via chunking and variable learning. Sequences are first parsed according to a parsing graph containing existing chunks hierarchically. If one chunk follows another chunk in a statistically significant manner, their concatenation is added to the list of chunks alongside hierarchical information. Variables are also identified as a set of chunks that consistently occur before/after another chunk. Variables can also be used in chunk formation.\n\nThe model is tested on both a synthetic dataset and real world sequential datasets (language modeling) and compared against HCM and LZ78 on parsing search steps, sequence length, sequence negative log-likelihood, and encoding efficiency. It is shown that the design of HVM does indeed provide benefits with respect to these metrics.\n\nThe authors also HVM reflects human memorization and transfer performance on a memorization and transfer task.\n\nIn addition, the authors compare HVM performance to LLMs and associative learning models on the same cognitive task.\n\nIt is also shown that higher levels of abstraction in HVM leads to relatively higher likelihood when parsing unseen sequences compared to HCM, suggesting improved generalization." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Some small presentation issues in the appendix: equation in line 1020 is too long, figure 7 has a red line from a word processor, figure 8 has a red line that seems to be an error\n- While I did praise the comprehensive evaluation, it is almost _too_ much content; it was difficult for me to understand the model without referring to the appendix. I suggest including information about the set up in the main text to improve clarity.\n- Since the model builds up chunks by considering frequencies of adjacent chunks, there might be a limit to the expressivity of the formed abstraction. For example, while it might be able to capture the pattern of two chunks consistently being n chunks apart (if the intermediate chunks are all members of the set representing a variable), it cannot capture a pattern of two chunks being a variable number of chunks apart, where the number of chunks is determined by a symbol being present in the sequence.\n- The evaluation only considers coding efficiency, compression efficiency, etc as a proxy for abstraction, which I do not believe to be sufficient for the purpose of demonstrating the effectiveness of the learned abstract representations. The demonstration of generalization does provide some evidence (Q1 below related to this point)." 
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "The paper presents a hierarchical variable learning model (HVM) that efficiently abstracts patterns in sequences, outperforming standard compression methods and large language models in mimicking human memory and generalization." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024building,\ntitle={Building, Reusing, and Generalizing Abstract Representations from Concrete Sequences},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xIUUnzrUtD},\nnote={under review}\n}" }, "abstract": { "value": "Humans excel at learning abstract patterns across different sequences, filtering out irrelevant details, and transferring these generalized concepts to new sequences.\nIn contrast, many sequence learning\nmodels lack the ability to abstract, which leads to memory\ninefficiency and poor transfer. We introduce a non-parametric hierarchical variable learning model (HVM) that learns chunks from sequences and abstracts contextually similar chunks as variables. HVM efficiently organizes memory while uncovering abstractions, leading to compact sequence representations. When learning on language datasets such as babyLM, HVM learns a more efficient dictionary than standard compression algorithms such as Lempel-Ziv. In a sequence recall task requiring the acquisition and transfer of variables embedded in sequences, we demonstrate HVM's sequence likelihood correlates with human recall times. In contrast, large language models (LLMs) struggle to transfer abstract variables as effectively as humans. From HVM's adjustable layer of abstraction, we demonstrate that the model realizes a precise trade-off between compression and generalization. Our work offers a cognitive model that captures the learning and transfer of abstract representations in human cognition and differentiates itself from the behavior of large language models." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Abstraction", "Chunking", "Cognitive Science", "LLMs" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/9f35bb8942e1b6a3bf60f73bbe3d8addb78c58aa.pdf" }, "presentation": null, "primary_area": { "value": "transfer learning, meta learning, and lifelong learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Building, Reusing, and Generalizing Abstract Representations from Concrete Sequences" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xIW2WtCuYE
On the Role of Image Statistics and Gradient Learning in the Adversarial Vulnerability of Neural Networks
main
Desk Reject
Adversarial Examples;Image Statistics;Gradient Learning
learning theory
Hadar Yosef;Yair Weiss
~Hadar_Yosef1;~Yair_Weiss1
0
0
0
0
0
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": { "value": "The submitted PDF is a placeholder and not a valid submission." }, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": { "value": "Submission Desk Rejected by Program Chairs" }, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": null }, { "TLDR": { "value": "Adversarial examples arise due to the use of gradient learning and random initial conditions. This means they can be alleviated using a simple postprocessing." }, "_bibtex": { "value": "@misc{\nyosef2024on,\ntitle={On the Role of Image Statistics and Gradient Learning in the Adversarial Vulnerability of Neural Networks},\nauthor={Hadar Yosef and Yair Weiss},\nyear={2024},\nurl={https://openreview.net/forum?id=xIW2WtCuYE}\n}" }, "abstract": { "value": "Perhaps the most surprising failure of classifiers learned by modern neural networks is that they can be fooled by tiny, imperceptible, perturbations to the input. \n In this paper, we present theoretical and empirical results which\n suggest that this failure is related to the use of randomly-initialized gradient-based learning together with the statistics of natural images. Our results are based on the previously reported 'PC-bias' of gradient-based learning: projections of the classifier in directions with large variance are learned much faster than directions with small variance. We prove that when the PC-bias is combined with the rapidly decreasing eigenspectrum of natural images, then gradient learning will provably learn a classifier that is highly vulnerable to small perturbations and we show experimentally that this behavior occurs when training deep, nonlinear neural networks. We use our analysis to suggest a simple post-processing of a learned classifier which can significantly improve its robust accuracy." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": { "value": [ "~Hadar_Yosef1", "~Yair_Weiss1" ] }, "authors": { "value": [ "Hadar Yosef", "Yair Weiss" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Adversarial Examples", "Image Statistics", "Gradient Learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": { "value": "yosef|on_the_role_of_image_statistics_and_gradient_learning_in_the_adversarial_vulnerability_of_neural_networks" }, "pdf": { "value": "/pdf/3843b97c622939d3a4e2fe11fbbc5d3e4881d874.pdf" }, "presentation": null, "primary_area": { "value": "learning theory" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "On the Role of Image Statistics and Gradient Learning in the Adversarial Vulnerability of Neural Networks" }, "venue": { "value": "ICLR 2025 Conference Desk Rejected Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Desk_Rejected_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xImTb8mNOr
Just How Flexible are Neural Networks in Practice?
main
Active
Neural networks;approximation theory;model complexity;generalization
optimization
3;5;5;5;6
3;4;3;3;4
2;3;3;3;3
2;1;2;2;2
2;3;3;2;4
4.8
3.4
2.8
1.8
2.8
0.583333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. **Theoretical Connection to SGD Dynamics**\nCould you explain or analyze why SGD enables fitting more samples than full-batch GD (Figure 3b)? Your empirical results show this consistently, but understanding the mechanism (implicit regularization, loss landscape exploration, or other factors) would significantly strengthen the paper. Have you considered analyzing the loss landscape properties or gradient noise characteristics of these solutions?\n\n2. **EMC Scalability**\nFor a network with 100M parameters, computing EMC appears to require dozens of full training runs. Have you explored efficient approximation methods or upper/lower bounds that could make EMC practical for modern architectures? What is the largest model size where EMC remains computationally feasible?\n\n3. **Architecture Generalization**\nThe superior parameter efficiency of CNNs persists even on random data - does this hold for other domains? Specifically, have you tested whether similar architectural advantages appear when comparing Transformers vs. MLPs on sequence tasks? This would help validate whether your findings about architectural benefits generalize beyond vision.\n\n4. **EMC Failure Modes**\nUnder what conditions does the correlation between EMC gap (real vs. random labels) and generalization break down? Have you tested this with different optimization settings, architectures, or dataset properties? Understanding the limitations of EMC as a generalization predictor would clarify its applicability.\n\n5. **Statistical Significance**\nCould you provide formal hypothesis tests and effect size calculations for the architecture comparisons, particularly for the EMC differences between CNNs, MLPs, and ViTs? This would help quantify the strength and reliability of your findings about architectural advantages." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "**Originality:**\nThe paper's primary innovation lies in systematically quantifying the gap between theoretical and practical neural network capacity. While building on Nakkiran's EMC metric, it makes three notable advances: (1) demonstrating that SGD solutions enable fitting more samples than full-batch gradient descent, challenging the conventional wisdom about SGD's purely regularizing role, (2) showing that CNNs maintain parameter efficiency advantages even on random data, suggesting fundamental architectural benefits beyond inductive biases, and (3) establishing EMC differences between correct and random labels as a strong generalization predictor. 
However, the core methodology remains largely derivative of existing capacity measures, and the theoretical framing draws heavily from prior work on overparameterization.\n\n**Quality:**\nThe experimental methodology exhibits both strengths and concerning limitations. The convergence criteria combining gradient norms, loss plateaus, and Hessian eigenvalue verification provides robust guarantees for capacity measurement. The systematic ablation across architectures (MLPs, CNNs, ViTs), optimizers (SGD, Adam, Shampoo), and data conditions enables clean isolation of individual factors. However, two critical weaknesses undermine the work: (1) the lack of theoretical analysis explaining why SGD enables fitting more samples or why CNNs maintain efficiency on random data, and (2) insufficient statistical rigor - while error bars are provided, formal hypothesis testing and effect size calculations are notably absent. The computational feasibility of EMC calculation for large architectures also raises scalability concerns.\n\n**Clarity:**\nThe paper's structure effectively builds from motivation through methodology to results, with particularly strong visualization of key findings. The experimental section clearly delineates controls and confounding factors. However, several crucial elements lack sufficient detail: the precise criteria for EMC convergence, the hyperparameter optimization methodology, and most importantly, the theoretical connections between EMC and generalization. The appendices provide thorough implementation details but omit key derivations and proofs. The paper would benefit from explicit formalization of its hypotheses and clearer specification of where empirical results extend versus contradict prior theoretical work.\n\n**Significance:**\nWhile the paper's empirical findings are interesting, their impact is constrained by three factors: (1) domain specificity - results are primarily limited to image classification tasks, leaving questions about generalization to other domains like language models or reinforcement learning, (2) lack of theoretical grounding - without mechanistic explanations for the observed phenomena, it's unclear how to extend these insights to new architectures or training regimes, and (3) practical limitations - the computational cost of measuring EMC may restrict its applicability. That said, the demonstration of CNN architectural advantages persisting even on random data provides valuable guidance for architecture design, and the EMC-based generalization predictor outperforming existing metrics offers immediate practical utility. The work opens important questions about the relationship between optimization algorithms and model capacity." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper empirically investigates the practical flexibility and capacity of neural networks to fit data, introducing several key findings:\n\n1. Practical Capacity vs Theory: While theory suggests neural networks can fit as many samples as they have parameters, in practice, they often fit significantly fewer samples under standard training procedures.\n\n2. Architectural Efficiency: The study finds that CNNs are more parameter-efficient than MLPs and Vision Transformers (ViTs), even when trained on randomly labeled data, highlighting the importance of architectural inductive biases.\n\n3. 
Optimization Effects: Stochastic training methods like SGD enable networks to fit more data than full-batch gradient descent, suggesting that stochasticity enhances flexibility beyond regularization effects.\n\n4. Generalization Predictor: The difference between a network's ability to fit correctly labeled versus incorrectly labeled data strongly correlates with generalization performance, providing a novel metric for predicting generalization.\n\n5. Activation Function Impact: ReLU activation functions improve data-fitting capability beyond their traditional role in addressing gradient issues.\n\nThe paper measures these effects using the Effective Model Complexity (EMC) metric, which quantifies the largest sample size a model can perfectly fit under realistic training conditions. To support their findings, the authors conduct extensive experiments across various datasets (including ImageNet-20MS), model architectures, and training procedures.\n\nThis research bridges theoretical understanding with practical observations about neural network capacity, providing insights into model design, training procedures, and the relationship between flexibility and generalization." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Key Technical Limitations and Suggested Improvements:**\n\n1. **Theoretical Foundation for SGD Findings**\nThe paper's most striking result - that SGD enables fitting more samples than full-batch GD (Figure 3b) - lacks theoretical analysis. While empirically robust, understanding why this occurs is crucial (please let me know if I'm missing something). The authors should investigate whether this results from:\n - Loss landscape exploration properties (could analyze loss surface geometry using recent techniques from Li et al. 2018, \"Visualizing the Loss Landscape of Neural Nets\")\n - Implicit regularization effects (connect to Gunasekar et al. 2021 work on implicit biases)\n - Different minima characteristics (analyze Hessian properties of solutions found by each optimizer)\n\n2. **Limited Domain Validation**\nWhile image classification results are thorough, claims about general network capacity require broader validation:\n- Test on sequence modeling tasks to verify if CNN parameter efficiency persists in different domains\n- Include language learning experiments to examine capacity effects with sequential, non-iid data\n- Current conclusions may not generalize beyond vision - a critical limitation for a paper about fundamental network properties\n\n3. **EMC Practicality Concerns**\nThe EMC metric, while insightful, has serious computational limitations:\n- Computing EMC for large models (>100M parameters) requires prohibitive compute\n- No discussion of approximation methods or scaling strategies\n- Need comparison with cheaper alternatives (gradient noise scale, NTK condition numbers)\nSuggesting efficient estimation methods would make EMC more practically relevant.\n\n4. **Statistical Rigor**\nThe empirical analysis needs stronger statistical validation:\n- Add formal hypothesis tests for architecture comparisons\n- Include effect size calculations to quantify the strength of observed differences\n- Provide confidence intervals for EMC measurements\nThis would help distinguish robust findings from potential noise in the experiments.\n\nThese limitations don't invalidate the paper's contributions, but addressing them would significantly strengthen its impact and reliability." 
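The EMC scalability concerns raised in the questions and weaknesses above can be made concrete with a sketch of what an iterative capacity search entails. The following is a hypothetical illustration, not the paper's procedure: the `fits_perfectly` callable stands in for an unspecified train-to-convergence-and-check step, and the binary search assumes memorization is monotone in the sample count, which the paper's method may not assume.

```python
def estimate_emc(fits_perfectly, lo: int, hi: int) -> int:
    """Largest n in [lo, hi] for which fits_perfectly(n) is True.

    fits_perfectly(n) is assumed to train a fresh model on n samples to
    convergence and report whether it reached 100% training accuracy,
    so every probe costs one full training run (O(log n) runs in total).
    """
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if fits_perfectly(mid):
            lo = mid       # the model can still memorize mid samples
        else:
            hi = mid - 1   # capacity exceeded; search below mid
    return lo

# Toy usage with a fictitious capacity of 5000 samples:
print(estimate_emc(lambda n: n <= 5000, lo=1, hi=100_000))  # -> 5000
```

Even with the logarithmic search, each probe is a complete training run, which is the source of the cost objection for models with >100M parameters.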
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Please respond to the questions above." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is written clearly and easy to understand. \n\n2. The influence of architectures, optimizers, and activation functions on model capacity is interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper empirically investigates the practical capacity and flexibility of deep neural networks compared to theoretical capacity. This paper reveals that parameter counting is not sufficient to understand a neural network's capacity to fit data. Effective Model Capacity (EMC), which captures the practical training dynamics, is a better measure of understanding model capacity and flexibility. It reveals dependence on other factors, such as stochasticity in optimization, activation functions, etc. The authors also observe inefficiency in parameter utilization neural networks and proposed parametrization strategies to increase parameter efficiency, such as subspace training and quantization." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The reasoning behind why SGD converges to solutions that fit fewer samples than parameter count is not clear. Authors should provide a step-by-step explanation of the mechanism by which SGD leads to solutions that fit fewer samples. It will be better to include a comparison with full-batch gradient descent to highlight the specific role of stochasticity in this phenomenon.\n\n2. In Figure 1, CIFAR-10 CNN and CIFAR-10 MLP have EMC values approximately close to each other for higher values of parameter count. Thus, the observation that CNN is a more parameter-efficient MLP is not verified. This is true for MNISt-MLP and MNIST-CNN. Authors should discuss potential reasons for the convergence of EMC values at higher parameter counts and how this affects their conclusions about parameter efficiency of CNN as compared to MLP.\n\n3. The author should include Kendall's ranking correlation as a metric to show performance improvements in the generalization gap [https://arxiv.org/pdf/2012.07976]." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Can you provide more details about why EMC can be regarded as a predictor of generalization performance?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well-written and easy to follow. The results involve lots of experimental observations. This topic may be an interesting direction." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper investigates the flexibility of neural networks from a new aspect with a metric called \"Empirical Model Complexity\". The paper considers factors such as optimizers, neural network architectures, activation functions, and regularization techniques that influence EMC. According to the experimental results, the paper finds the relation between EMC and all the factors considered." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. When investigating the relation between architectures and EMC, it is hard to compare the different architectures. The shape of the architecture may have a large impact on the ability of networks, so the paper needs to explain more about the comparison among different architectures.\n2. It seems like the paper summarizes and explains the results obtained by experiments without the underlying reasons. For instance, the paper states that only ReLU improves the network's ability among all the activation functions selected, but we can not know the reason why ReLU is the special one.\n3. The process of computing EMC may not be so rigid. There may be some settings that cause EMC to stop growing. And, the paper does not provide any figures about training accuracies when increasing the sample size." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Figure 2 suggests that MLPs fit random inputs more easily than semantic labels, while the opposite is true for CNNs. This contradicts the intuition that semantic labels, being more structured, should be easier to fit than random data. This would mean that the following statement from section 5.2.1 is not generalizable/valid for random inputs “We see here that the networks fit significantly fewer samples when assigned random labels compared to the original labels, indicating that neural networks are less parameter efficient than linear models in this setting. “ Why is that and what are the possible hypotheses or possible explanations for this behavior?\n\n2. 
While it's expected that CNNs would have higher EMC than MLPs due to their architectural differences, it's less intuitive why CNNs exhibit higher EMC than ViTs. ViTs generally demonstrate better generalization capabilities compared to CNNs. This raises questions about the assumed correlation between EMC and generalization, particularly when comparing CNNs and ViTs. Does Figure 4.b show an **EMC improvement** for CNNs over ViTs? If so, how does this relate to their respective generalization gaps? Maybe the link between EMC and generalization isn't so straightforward, and it could change depending on the type of model. What are the authors' thoughts on this?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is overall well-written and well-structured. Some sections (for example Sections 3 and 4) could be more concise. While certain mentioned details might be helpful for broader audiences, experienced readers might find them overly detailed, as the details are mostly conventional practices in the literature.\n- The empirical results are comprehensive and well-presented. The flow logically guides the reader through the findings, which are both intuitive and interesting to the research community." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper aims to study the capacity and flexibility of neural networks in practical settings. The authors suggest that, unlike theoretical expectations, neural networks cannot in practice memorize the same number of training samples as their number of parameters, and that the number of neural network parameters is not the only underlying factor. In addition, they study the effect of neural network architecture, optimization approaches, and activation functions on the memorization capacity. They further show the capability of the Effective Model Capacity (EMC) (particularly its difference in fitting randomly labeled samples vs correctly labeled samples) to predict generalization." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- While the paper provides a valuable exploration of the EMC metric [Nakkiran et al, 2021] and its implications, it lacks novelty. The paper's core findings, while interesting, appear to primarily confirm existing understandings about neural network memorization. The main adaptation made to EMC is based on an assumption that neural networks memorize/fit correctly and incorrectly labeled samples differently. This has been previously studied both theoretically and empirically, for example in [Garg et al, 2021] and [Forouzesh et al, 2023], respectively.\n- The paper could benefit from a deeper analysis and interpretation of the findings. Most of the provided discussions are conventionally known in the literature, and the findings that go a bit beyond existing knowledge are not accompanied by potential new explanations. More specific examples are given in the questions section below.\n- While Figure 1.a may suggest a relationship between generalization and data-fitting capability, it's crucial to acknowledge the limitations of this observation. The figure alone cannot directly support the claim that \"generalization is related to data-fitting capability.\"\nThe key issue is when comparing models trained on different datasets, like MNIST and ImageNet.
Such a comparison might be misleading, and it is like comparing apples and oranges. The observed relationship in Figure 1.a could result from an underlying hypothesis: models achieving a specific training accuracy on MNIST might exhibit lower generalization capability than models with the same training accuracy on ImageNet. However, this is a separate assumption requiring further validation. Concluding a direct relationship between generalization and data-fitting based solely on Figure 1.a, without exploring this underlying assumption, would be premature.\n\n[Nakkiran et al., 2021] Where bigger models and more data hurt. Journal of Statistical Mechanics: Theory and Experiment, 2021\n\n[Garg et al., 2021] RATT: Leveraging Unlabeled Data to Guarantee Generalization, ICML 2021\n\n[Forouzesh et al., 2023] Leveraging Unlabeled Data to Track Memorization, ICLR 2023" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- The authors focused on discriminative tasks (classification); do the findings presented in this paper also map to generative tasks?\n- Given the point above, it would be interesting to see the effect on LLMs, which are overparameterized." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper is well written and easy to follow.\n- The experimental section is thorough, covering a variety of datasets, architectures, and design choices." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors investigate the practical flexibility of neural networks through extensive experiments.\nThe authors make the following contributions:\n- Standard training procedures often result in neural networks fitting datasets that contain significantly fewer samples than there are model parameters.\n- CNN-based architectures are more parameter-efficient than MLPs and ViTs.\n- Stochastic Gradient Descent (SGD) is more flexible than GD.\n- EMC can serve as a generalization prediction metric.\n- ReLU activation functions improve a model’s ability to fit data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The novelty and core contributions of this work are not immediately evident, making it challenging to discern what differentiates it from previous research in the field. Additionally, the authors have not clearly conveyed a concrete takeaway message that highlights its practical applications or potential benefits for real-world usage. As a result, readers may struggle to understand the findings of this paper.\n- Besides the correlation between EMC and the model's generalization, I find the insights presented in this work to be rather trivial.
For example, the claim of \"SGD is more flexible than GD\" - this phenomenon was already empirically investigated in [1] where they showed that using large batches results in sharper minima in the loss landscape. Therefore the optimized model lacks generalization capabilities. In addition, the claim \"ReLU activation functions improve a model’s ability to fit data\" is demonstrated in [2,3].\n- There are some missing details regarding how the EMC is calculated. I suspect that the score heavily depends on the size of the data partitions. Specifically, how many samples are used in the first iteration? How many are added in each subsequent iteration? Additionally, how many epochs (update steps) do you run during each iteration? These details are crucial for readers' understanding and for the reproducibility of this work.\n- Calculating the EMC is computationally intensive, particularly with today’s larger models, which have a greater number of parameters, and datasets, which involve larger input sizes and more samples. This complexity makes using EMC as a metric for generalization impractical. How long did it take to compute the EMC for the ImageNet-20MS dataset? Can we approximate EMC to reduce the computational burden, or how can we make this process more efficient?\n\n----------\n\n[1] On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima, Keskar et al.\n\n[2] Effects of the Nonlinearity in Activation Functions on the Performance of Deep Learning Models, Kulathunga et al.\n\n[3] An Empirical Study on Generalizations of the ReLU Activation Function, Banerjee et al." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "The paper investigates the practical flexibility of neural networks, revealing that optimization methods, architecture, and data intricacies significantly impact their capacity to fit data." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024just,\ntitle={Just How Flexible are Neural Networks in Practice?},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xImTb8mNOr},\nnote={under review}\n}" }, "abstract": { "value": "Although overparameterization theory suggests that neural networks can fit any dataset with up to as many samples as they have parameters, practical limitations often prevent them from reaching this capacity. In this study, we empirically investigate the practical flexibility of neural networks and uncover several surprising findings. Firstly, we observe that standard optimizers, such as stochastic gradient descent (SGD), often converge to solutions that fit significantly fewer samples than the model's parameter count, highlighting a gap between theoretical and practical capacity. Secondly, we find that convolutional neural networks (CNNs) are substantially more parameter-efficient than multi-layer perceptrons (MLPs) and Vision Transformers (ViTs), even when trained on randomly labeled data, emphasizing the role of architectural inductive biases. Thirdly, we demonstrate that the difference in a network's ability to fit correctly labeled data versus incorrectly labeled data is a strong predictor of generalization performance, offering a novel metric for predicting generalization. Lastly, we show that stochastic training methods like SGD enable networks to fit more data than full-batch gradient descent, suggesting that stochasticity enhances flexibility beyond regularization effects. 
These findings highlight the importance of understanding practical capacity limits and their implications for model generalization, providing new insights into neural network training and architectural design." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Neural networks", "approximation theory", "model complexity", "generalization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/006a2148191835b7f394cdb2430357cc7b8fe23c.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Just How Flexible are Neural Networks in Practice?" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
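Several of the reviews above ask for formal hypothesis tests and effect sizes for the architecture comparisons. A minimal sketch of what such an analysis could look like for per-seed EMC measurements follows; all names and numbers are illustrative, since the paper's statistical protocol is not specified in the reviews.

```python
import numpy as np
from scipy import stats

def compare_emc(emc_a: np.ndarray, emc_b: np.ndarray, confidence: float = 0.95):
    """Paired t-test, Cohen's d, and confidence interval for EMC values
    measured over matched seeds for two architectures."""
    diff = emc_a - emc_b                          # per-seed differences
    t, p = stats.ttest_rel(emc_a, emc_b)          # paired t-test
    d = diff.mean() / diff.std(ddof=1)            # paired Cohen's d
    ci = stats.t.interval(confidence, len(diff) - 1,
                          loc=diff.mean(), scale=stats.sem(diff))
    return {"t": t, "p": p, "cohens_d": d, "ci": ci}

# Toy usage with fabricated per-seed EMC values for two architectures:
rng = np.random.default_rng(0)
print(compare_emc(rng.normal(9e5, 5e4, size=10), rng.normal(8e5, 5e4, size=10)))
```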
xJDxVDG3x2
MolSpectra: Pre-training 3D Molecular Representation with Multi-modal Energy Spectra
main
Active
3D molecular representation learning;molecular spectra;pre-training
applications to physical sciences (physics, chemistry, biology, etc.)
3;5;6
4;3;5
1;3;4
2;2;3
2;3;2
4.666667
4
2.666667
2.333333
2.333333
0.327327
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Can the model be learned in one step simultaneously with contrastive objective and denoising one? \n2. How sensitive the results are to different learning rate/transformer configuration given fixed stride/patch/mask/ optimizer parameters." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Incorporating additional data is always useful. Although benefits are marginal, this work will be relevant for subsequent methods that learn representations of molecules. The validating experiments show that performance improvements are consistent albeit small." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Authors introduce a new data modality to the pretraining of 3d molecular structures representation models, namely absorbtion spectra in three different frequency spans: IR, UV, Raman. To encode the spectra they use transformer on top of the spectral patches with positional encoding. They use two-stage pretraining, by first denoising on a larger dataset without spectra; and then using contrastive learning objective coupled with the masked spectral patch reconstruction objective to finish pretraining. Authors demonstrate effectivnes of incorporating spectral information without denoising pretraining on QM9 dataset. Then they show small improvement in prediction quality on downstream tasks for QM9 and MD17 datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The method of pretraining suffers from additional complexity due to two-step pretraining procedure. The choice of such pretraining schedule is not well explained.\n2. Table 2, first column, methods Torch-MD and PaiNN should be bold, similarly second column there methods that perform better or on par with the proposed one.\n3. Table 4 and 5 show that the method is sensitive to the parameters of the patch and stride sizes of spectrum encoder, the difference in performance in predicting homo 0.2, lumo 0.5, gap 0.2 between the closest values of these parameters. This roughly corresponds to the difference between next best performing method homo 0.5, lumo 1.5, gap 0.6. We suspect that row 1 column overlap ration the number is incorrect and should be swapped with row 3 same column. \n4. Authors mention that they fine tuned parameters such as stride/patch/mask and additionally weights of each objective. We hope that the tuning was done using pretraining dataset and not downstream tasks performance. This is not mentioned in appendix C2 or the main text. Please clarify this point." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- The model appears computationally heavy. Is there any analysis of the computational costs for the experiments conducted?\n\n - Based on results from other studies, Frad and SliDe seem to perform better on QM9 and MD17 tasks. Have any additional tests been conducted to compare these recent methods?\n\n - If Frad and SliDe outperform the proposed model—and both already account for specific potential energy forms similar to this model—can it truly be claimed that the inclusion of quantum spectral data contributes to the model's superiority? Is there any analysis or ablation study that demonstrates the usefulness of the quantum data?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- A quantum mechanical approach is considered for molecular representation learning.\n\n - The denoising component of the model is designed with sophistication, incorporating both rotational and vibrational energies into the Boltzmann distribution.\n\n - A contrastive setting enables inference without molecular spectral data, enhancing the model's usability in real-world situations.\n\n - Numerous downstream experiments demonstrate that the model outperforms conventional approaches." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "- Unlike common approaches, the authors incorporate quantum information to enhance the quality of molecular representations.\n\n - The denoising component of the model is carefully designed to account for rotational and vibrational degrees of freedom in the energy.\n\n - The model consists of two distinct parts connected by contrastive loss: one for denoising and the other a transformer for quantum spectrums.\n\n - Numerous experiments demonstrate the model’s superiority over conventional methods.\n\nThe attempt to incorporate quantum information into the model is impressive. Generally, this approach is believed to enhance prediction performance over conventional models that rely on classical methods. However, as outlined in the questions section, there are still unresolved points. Therefore, the score is not final, and I am open to further discussion with the authors before finalizing it." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The spectral data has a high dimensionality, and the model’s transformer architecture is quite resource-intensive. 
Given that the contrastive loss requires numerous data combinations, the model is likely to be computationally demanding.\n\n - The experiments do not include comparisons with recent models, such as SliDe (1) or Frad (2), despite the model's energy function incorporating potential forms used in both SliDe and Frad.\n\n - The experimental conditions are not rigorously specified, such as the number of negative samples in the contrastive loss and the noise-generation parameters.\n\n\n <References>\n\n(1) Yuyan Ni, Shikun Feng, Wei-Ying Ma, Zhi-Ming Ma, and Yanyan Lan. Sliced denoising: A physics-informed molecular pre-training method. arXiv preprint arXiv:2311.02124, 2023.\n\n(2) Shikun Feng, Yuyan Ni, Yanyan Lan, Zhi-Ming Ma, and Wei-Ying Ma. Fractional denoising for 3D molecular pre-training. In International Conference on Machine Learning, pp. 9938–9961. PMLR, 2023." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* From the appendix, it seems like TorchMD-Net is the model used for structural representation learning. Is the architecture in any way different from the TorchMD-Net model for which results are reported in Tables 2 and 3? If yes, these differences should be pointed out explicitly. Also, I suggest relabelling the entries for MolSpectra in Tables 2 and 3 as something like \"TorchMD-Net (w/ MolSpectra)\" or similar. Also, the baseline (see my suggestions above) should be clearly labelled as such, so that readers know which numbers to compare to.\n\n* Could the authors elaborate on the choice of the specific spectra types (UV-Vis, IR, and Raman) used in this work? It seems to me like other types of spectra, such as NMR and mass spectra, could provide additional information and further enhance the learned representations.\n\n* Can the authors provide details on the computational complexity and resource requirements of MolSpectra, including training time and memory usage? A comparison with baseline methods would be helpful.\n\n* How does the performance of MolSpectra scale with the size and diversity of the molecular dataset used for pre-training?\n\n* Can the authors provide a more in-depth analysis of the learned representations? For instance, visualizing the latent space or analyzing the attention patterns in SpecFormer could provide insights into the captured features and relationships.\n\n* For the simple test of the effectiveness of molecular spectra (section 4.1/Table 1), where do the spectra used to obtain the spectral representations come from? I assume they are taken from QM9S, but this should be stated explicitly.\n\n* How sensitive is MolSpectra to the choice of hyperparameters in Eq.8 ($\beta_{\text{Denoising}}$, $\beta_{\text{MPR}}$, and $\beta_{\text{Contrast}}$)? The authors mention in the appendix that they tuned these hyperparameters by trying different values. 
I believe the results for these different runs should also be included (in the appendix is fine), so readers can develop an intuition for how changes to the values affect downstream performance.\n\n* In the appendix, the authors write that they apply a $\log_{10}$ transform to the peak intensities to mitigate interference caused by peak intensity differences. It seems intuitively more meaningful to me to instead normalize spectra by setting the height of the highest intensity peak to an arbitrary value (say 1) and scaling the remaining peaks proportionally. Have the authors experimented with different \"normalization procedures\" such as the one mentioned?\n\n**Additional Feedback:**\n\n* A discussion of the limitations of the proposed method and potential future directions for research would be interesting.\n\n* There is a typo on p.8 l.427/428: \"yiels\" should be \"yields\"." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "+ The paper presents an interesting approach to molecular representation learning by incorporating information from spectra.\n\n+ The motivation for incorporating spectral data is well-justified. The authors argue that quantized energy levels are fundamental to understanding molecular properties and dynamics, and that spectral data provides a direct measurement of these levels.\n\n+ The proposed SpecFormer architecture and the associated MPR and contrastive learning objectives are technically sound and well-designed." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes MolSpectra, an approach for pre-training 3D molecular representations by combining a denoising objective with a contrastive loss. The contrastive loss aligns the 3D representations with representations for molecular spectra (UV-Vis, IR, Raman), which are trained via a masked patch reconstruction (MPR) objective. The authors argue that incorporating information from molecular spectra enables MolSpectra to learn the dynamic evolution of molecules by understanding energy level transition patterns. MolSpectra is evaluated on the QM9 and MD17 benchmarks and compared to existing methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The results on the QM9 and MD17 benchmarks presented in Tables 2 and 3 are misleading for two reasons: (1) In Table 2 (QM9), the caption states that the best results are highlighted in bold; however, this is not true. In fact, the numbers for MolSpectra are highlighted in bold for 9 out of 12 columns, but it only achieves the best results in 3 out of 12 columns. (2) Both tables only include older models that are not SOTA anymore. Newer models such as MACE (https://proceedings.neurips.cc/paper_files/paper/2022/file/4a36c3c51af11ed9f34615b81edb5bbc-Paper-Conference.pdf) and Allegro (https://www.nature.com/articles/s41467-023-36329-y) achieve significantly better results than all listed models (I have not done a literature search to check which model is the current SOTA; those are just two models with better performance I know off the top of my head).\nThe authors should include results for more recent (SOTA) models in both tables and fix the formatting, so that the result that is actually best is highlighted in bold. 
In addition (this is a minor suggestion that does not affect my rating), I think it would be helpful to readers to include (either in the caption, or within the table itself) an explanation/label of what the horizontal separator signifies. I assume this is to distinguish models trained from scratch from models that were pre-trained in an unsupervised manner and then finetuned, but this should probably be made explicit.\n\n- It is difficult to assess how big the improvement of including spectral information into the unsupervised representation learning actually is. This is because it is not immediately clear from Tables 1 and 2 how the same model architecture, (pre-)trained on the same structural data, performs when the *only* difference is whether spectral information was included in pre-training or not. The authors write on p.7 that Coord serves as the primary baseline, but it is not clear from the text whether this is actually the same model architecture, and how exactly it was trained. To have an objective baseline, I suggest pre-training the same architecture using the two-stage pre-training pipeline (section 3.4) twice, with the only difference being that $\beta_{\text{MPR}}$ and $\beta_{\text{Contrast}}$ are set to zero for one of the models. This way, the only difference is whether spectral information is used or not, allowing a direct assessment of the effectiveness of including this information.\n\n- The manuscript applies MolSpectra only to a single architecture for structural representation learning (TorchMD-Net). It is therefore difficult to judge whether MolSpectra is generally effective, or whether its usefulness is strongly dependent on the underlying architecture used for structural representation learning. To address this shortcoming, the authors should apply MolSpectra to pre-training other architectures (ideally using the method described in my previous point to establish objective baselines for each architecture). This would allow readers to assess MolSpectra in a broader context and would significantly strengthen the paper.\n\n- The authors state that \"MolSpectra learns the dynamic evolution of molecules by understanding energy level transition patterns\", however, this statement is not supported by direct evidence. I think it is a valid hypothesis, but it should be tested explicitly. Fortunately, a very direct test is possible: As the authors correctly state, the denoising objective is equivalent to learning a force field. This means that models trained with/without MolSpectra on a denoising task can be directly used as a force field to run molecular dynamics (MD) simulations. From such MD simulations, it is trivial to extract the power spectrum of a molecule via the velocity autocorrelation function (see e.g. https://doi.org/10.1039/C3CP44302G if the underlying theory is not familiar). The power spectrum contains the same peaks as the IR and Raman spectra, with the only differences being that (1) all internal vibrations are active (in contrast to IR/Raman spectra, where only some vibrations are visible - the power spectrum contains peaks from both!) and (2) the peak intensities are different. The peak positions, however, are directly comparable. If the spectral information actually teaches a model about the dynamics of a molecule, I would expect the power spectrum of a model trained with MolSpectra to show much better agreement (in peak positions) with the \"ground-truth\" IR/Raman spectra. 
The authors should perform this test, as it would make the paper much more insightful (a minimal sketch of the power-spectrum extraction is given after this review block).\n\n- In section 2.2, the authors describe three different energy functions that can be used for pre-training. It is not immediately clear from the text which of these is actually used for MolSpectra. From context, I assume it is variant I, but I think stating this explicitly would make it easier to understand the details of the method.\n\n- The paper lacks an analysis of the computational complexity and resource requirements of the proposed method. A comparison of training time and resource usage with baseline methods would be beneficial.\n\n- While the ablation study provides insights into the importance of the MPR objective, a more comprehensive ablation study is needed to assess the individual contributions of different spectral modalities (UV-Vis, IR, Raman). This would provide a deeper understanding of how each component contributes to the overall performance.\n\n- The paper mainly compares against methods that rely on 3D structure information only. Comparison with other multimodal methods for molecular representation learning would provide a more complete picture of MolSpectra's performance relative to other methods." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose incorporating molecular spectra into the pre-training of 3D molecular representations, thereby infusing the knowledge of quantum mechanical principles into the representations." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024molspectra,\ntitle={MolSpectra: Pre-training 3D Molecular Representation with Multi-modal Energy Spectra},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xJDxVDG3x2},\nnote={under review}\n}" }, "abstract": { "value": "Establishing the relationship between 3D structures and the energy states of molecular systems has proven to be a promising approach for learning 3D molecular representations. However, existing methods are limited to modeling the molecular energy states from classical mechanics. This limitation results in a significant oversight of quantum mechanical effects, such as quantized (discrete) energy level structures, which offer a more accurate estimation of molecular energy and can be experimentally measured through energy spectra. In this paper, we propose to utilize the energy spectra to enhance the pre-training of 3D molecular representations (MolSpectra), thereby infusing the knowledge of quantum mechanics into the molecular representations. Specifically, we propose SpecFormer, a multi-spectrum encoder for encoding molecular spectra via masked patch reconstruction. By further aligning outputs from the 3D encoder and spectrum encoder using a contrastive objective, we enhance the 3D encoder's understanding of molecules. Evaluations on public benchmarks reveal that our pre-trained representations surpass existing methods in predicting molecular properties and modeling molecular dynamics, with an average performance improvement of 6.46%." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "3D molecular representation learning", "molecular spectra", "pre-training" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c08a6ac85ee67dc1cb4c169ad533ab33162d4b4c.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "MolSpectra: Pre-training 3D Molecular Representation with Multi-modal Energy Spectra" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xJUZHhrh3N
BiVWAC: Improving deep reinforcement learning algorithms using Bias-Variance Weighted Actor-Critic
main
Active
Reinforcement Learning;Bias;Variance;Actor-Critic;Deep Reinforcement Learning;SAC;PPO;AVEC;Mujoco
reinforcement learning
3;3;3
4;3;4
3;2;3
2;2;2
2;2;1
3
3.666667
2.666667
2
1.666667
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- I suggest to revise the whole manuscript and check the consistency of the arguments and the provided theoretical/experimental results." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Bias-variance trade-off is a fundamental topic in learning theories. In particular, variance reduction techniques are also important methods in policy gradient based reinforcement learning methods. Theoretical analyses and practical algorithms in this direction have a large group of potential audiences (significance).\n- Theoretical results look correct and sound (quality).\n- Experimental results indicate the potential effectiveness of variance reduction methods based on bias-variance decomposition (quality, significance)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes BiVWAC, a weighted sum of standard MSE and AVEC (Actor with Variance Estimated Critic) loss, which is derived by applying bias-variance decomposition to critic's residual error. It is experimentally shown that BiVWAC improves the performance of SAC and PPO, if the weight of MSE and AVEC losses are appropriately chosen. It is also shown that, an unbiased policy gradient estimator is also constructed for critics learned from BiVWAC loss, though the experimental results indicated that the uncorrected biased estimators likely perform well in practice." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Inconsistency of the theory and the experiment.\n - Throughout the exposition of the theoretical arguments, the \"true\" target is un-regularized value $Q^{\\pi}$ (or $V^{\\pi}$). Theorem 3 also states that $g_\\phi$ can be used to construct an unbiased estimate of conventional un-regularized policy gradient.\n - On the other hand, the base algorithms in the experiments are SAC and PPO, both of which do not align these un-regularized argument.\n - For SAC, the true target is the entropy regularized value, $Q_{\\tau}^{\\pi} = R(s,a) + \\gamma \\sum_{s',a'} P(s'|s,a)\\pi(a'|s') (Q_{\\tau}^{\\pi}(s',a') - \\tau \\log \\pi(a'|s'))$. 
In addition, the actor loss is a KL divergence between the parameterized policy and the energy-based policy induced by the regularized value.\n - For PPO, the actor loss incorporates clipped log-ratio values to implicitly apply a trust-region constraint.\n - Therefore, the theoretical results obtained in this paper do not explain the experimental behaviors of BiVWAC-SAC and BiVWAC-PPO.\n\n- A rather minor concern is that the quality of the writing seems unsatisfactory.\n - The expositions are not consistent in some parts.\n - In Section 2.2, the authors explore to which quantities Lemma 2.1 should be applied, and state that \"As the policy gradient $\nabla_{\theta} J$ directly reflects our objective of maximizing $J$, it is the best candidate\". However, Lemma 2.1 is not directly applied to $\nabla J $ but to the critic's residual error.\n - In L.118-119, it is stated that \"In this work we limit our scope to policies which can be represented by Gaussian distributions\". However, I found no Gaussian requirements in the theoretical arguments.\n - The following are the writing flaws that I noticed.\n - Italic and normal characters are mixed up for MSE, Bias and Var. It is recommended to use either of them consistently.\n - L.239/240: to be to be\n - The sentence after Eq. (8) lacks a period.\n - L.318: $\delta_{\rm BiVWAC}$ is used without a clear definition.\n - L.414: Figure 2 -> Figure 1?\n - L.685: the last one must be MSE_alpha(z,z-)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Why are the experiments in Fig. 3 l.h.s. limited to three environments?\n- How significant is the batch size in computing the empirical bias terms?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper investigates an interesting issue that is not well-understood in the community, especially when TD-learning is combined with neural function approximation.\n- The proposed method, from what I can see, is applicable to a wide range of algorithms, as the number of algorithms employing the MSE in TD learning is large.\n- The experiments involve a relatively large number of seeds and a wide range of values of $\alpha$." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes an algorithm in which the bias and the variance components of the mean squared error in temporal difference learning can be traded off in a flexible way. For this, the authors derive a loss function based on the convex combination of the bias-variance terms in the MSE. The authors conduct experiments on the Mujoco suite for various interpolation values and show a strong dependency on this parameter. Consequently, the authors suggest that appropriate tuning of this trade-off parameter can benefit learning. 
The authors moreover provide a reference value that works reasonably well on a range of environments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- To me, the main weakness of this paper in its current form is its presentation. Despite being a fairly straightforward reformulation of the mean squared error, I found the motivation and derivations hard to follow. For example, the equations in lines 64, 144, 180, 190, 229, and 246 all serve to arrive at the loss given in eq. 6 but in a rather cumbersome way, introducing several variables and notation that are not defined explicitly. I suggest that the derivation of Eq. 6 could be significantly more concise by showing that $\mathcal{L}_{avec}$ is in fact the variance term of a bias-variance decomposition of the MSE and that one can recover the MSE by adding to this the bias term with equal weighting. \n\n- Conceptually, I am unsure whether this approach should be thought of as an \"interpolation\" between AVEC and MSE in the sense that AVEC is a special case of $\alpha=0$. There is a significant difference, in my opinion, between weighing the components in lines 260 and 262 differently or having $\alpha=0$ (AVEC) and $\alpha=1$. This is because AVEC changes the minimizer of the objective function, whereas the shown approach mainly changes the weighting of gradients. For example, AVEC is not a sensible algorithm without the correction term, whereas one can argue that any value of $\alpha>0$ and $\alpha<1$ still shares the same minimizer as the MSE (assuming sufficient expressivity and continuous state-spaces).\n\n- The experimental results seem to speak to the above points, in that $\alpha=0$ is a significantly different algorithm. There are moreover a few experimental results that I find concerning:\n - In Fig. 3, several versions with $\alpha=0$ (AVEC) perform worse with the gradient correction. In my mind this is a highly counterintuitive result. In my understanding, the objective function of AVEC has no reason to provide accurate gradients without the correction term, so it is highly surprising to see higher performance without it. \n - I find Fig. 4 very difficult to interpret. For example, I don't follow the authors' suggestion that the trend of the curve is meaningful. The sign of the shown curves seems like a more relevant metric: for example, all curves on Ant-v4 l.h.s. seem to indicate that bias, variance, and MSE are reduced in the alpha version. This, however, does not really corroborate the trade-off nature suggested by the authors. But what I find more concerning is that I expected all curves to cross 0 at alpha=0.5, as the authors suggest that the loss equals the MSE for this value. Most curves, however, stray significantly far away from 0. The above points, in my opinion, cast serious doubt as to whether the bias-variance trade-off is the driving factor in the observed experimental results.\n\n- The related work section seems rather short. Several classical works (e.g. many works by Sutton, Singh, Bertsekas) discuss the bias-variance tradeoff at length in temporal difference methods, policy gradients, etc."
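To make the decomposition at issue in the first weakness above explicit: writing $\delta = \hat{Q}_\phi - y$ for the critic residual over a batch, the empirical MSE splits into a squared-bias and a variance term, and (as reconstructed from the reviews; the exact scaling of the paper's Eq. 6 may differ) the weighted loss reweights the two:

```latex
\mathrm{MSE}(\delta) = \underbrace{\mathbb{E}[\delta]^{2}}_{\mathrm{Bias}^{2}}
                     + \underbrace{\mathbb{E}\left[(\delta - \mathbb{E}[\delta])^{2}\right]}_{\mathrm{Var}},
\qquad
\mathcal{L}_{\alpha} = 2\alpha\,\mathrm{Bias}^{2} + 2(1-\alpha)\,\mathrm{Var}.
```

Under this form, $\alpha = 1/2$ recovers the MSE exactly and $\alpha = 0$ keeps only the variance term (the AVEC loss), matching the special cases stated in the paper's abstract below.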
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. For Table 1, auther did not report which $\\alpha$ used for the evaluation. Do you use the same $\\alpha$ for all envs or you use different?\n2. For Table 1, based on AVEC paper, the walker2d, AVEC-SAC is $4334 \\pm 128$ however in this paper, AVEC-SAC is $119 \\pm 488$. Additionally, the std value for Table 1 are all greater than AVEC paper. This may because of the gym version difference. However to avoid confusion and concerns, can auther perform another experiment using the same gym version as AVEC paper? (Just Walker2d is sufficent for the rebuttal)\n3. On page 9, line 478, there is a placeholder indicating \"Appendix ??,\" which appears to be a missing reference.\n4. In Figure 4, for SAC, the $\\alpha \\to 0$ result in larger bias and variance but in Figure 1, $\\alpha \\to 0$ result in best performance. Why controlling bias and variance does not inprove performance?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper introduce a novel $\\alpha$ to balance bias and variance.\n2. This papre provide some theoretical analysis." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper extend the AVEC framework by introducing a hyperparameter $\\alpha$ to control the balance between bias and variance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. This paper addresses the issue of bias between the function approximation $\\bar{Q^\\pi}$ and its target $y$. My understanding is that this approach aims to mitigate the challenge that the true value of the function is unknown. However, as shown in [1], the TD error is an inadequate substitute for the true value error, which can significantly impact RL performance. Although controlling the bias between the function approximation and its target is possible, the value error may still remain substantial. Could the author provide some theoretical analysis on this value error aspect?\n2. Although the $\\alpha$ is a novel contribution, the result does not reflect on this. Based on Figure 1, for SAC, $\\alpha = 0$ or $\\alpha$ close to 0 result in best performance. However if the $\\alpha = 0$, based on Equation 6, the loss function is exactly AVEC cost. \n\n\n\n[1] Fujimoto, Scott, et al. \"Why should i trust you, bellman? the bellman error is a poor replacement for value error.\" arXiv preprint arXiv:2201.12417 (2022)." 
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "We study weightings of bias-variance in the critic loss to improve actor-critic performances" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024bivwac,\ntitle={Bi{VWAC}: Improving deep reinforcement learning algorithms using Bias-Variance Weighted Actor-Critic},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xJUZHhrh3N},\nnote={under review}\n}" }, "abstract": { "value": "We introduce $\\textrm{\\textbf{Bi}as-\\textbf{V}ariance \\textbf{W}eighted \\textbf{A}ctor \\textbf{C}ritic (\\textbf{BiVWAC})}$, a modification scheme for actor-critic algorithms allowing control over the bias-variance weighting in the critic. In actor-critic algorithms, the critic loss is the Mean Squared Error (MSE). The MSE may be decomposed in terms of bias and variance. Based on this decomposition, BiVWAC constructs a new critic loss, through a hyperparameter $\\alpha$, to weigh bias vs variance. MSE and Actor with Variance Estimated Critic (AVEC, which only considers the variance in the MSE decomposition) are special cases of this weighting for $\\alpha=0.5$ and $\\alpha=0$ respectively. We demonstrate the theoretical consistency of our new critic loss and measure its performance on a set of tasks. We also study value estimation and gradient estimation capabilities of BiVWAC to understand the means by which BiVWAC impacts performance.\n We show experimentally that the MSE is suboptimal as a critic loss when compared to other $\\alpha$ values. We equip SAC and PPO with the BiVWAC loss to obtain BiVWAC-SAC and BiVWAC-PPO and we propose a safe $\\alpha$ value, $\\alpha^*$, for which BiVWAC-SAC is better than or equal to SAC in all studied tasks but one in terms of policy performance. We also point out that BiVWAC introduces minimal changes to the algorithms and virtually no additional computational cost. \n In addition we also present a method to compare the impact of critic modifications between algorithms in a sound manner." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Reinforcement Learning", "Bias", "Variance", "Actor-Critic", "Deep Reinforcement Learning", "SAC", "PPO", "AVEC", "Mujoco" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/0fea9c6bd81413a57fe8a346349777da335d2a6e.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "BiVWAC: Improving deep reinforcement learning algorithms using Bias-Variance Weighted Actor-Critic" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xJXq6FkqEw
Enhancing Uncertainty Estimation and Interpretability with Bayesian Non-negative Decision Layer
main
Active
Factor Analysis;Uncertainty Estimation;explainable AI
interpretability and explainable AI
5;6;6;8
2;3;3;3
2;3;3;4
2;2;3;3
2;3;3;2
6.25
2.75
3
2.5
2.5
0.662266
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- How do we know the distinction between epistemic and aleatoric uncertainty? The epistemic uncertainty should go to zero for large data limit. Is this the case in the present modeling? \n- In Sec. \"uncertainty evaluation metric\", how are the various n_ac, n_au, n_ic, n_iu actually computed?\n- Eq. 9: what is the difference between f_NNand f_lambda?\n- in section 3.2, the switch from Gaussian to Gamma distribution is unclear. Is the Gaussian distribution used in this work? It seems not but the sentence \"Both θ and Φ are sampled from a Gaussian distribution\" points otherwise. \n- in sparsity measurement a threshold of 10^-5 is defined for the weights. Shouldn't this come from the analysis of the distribution of weights, rather than just providing a number?\n-in table 1, it seems the application of BNDL to ResNet introduces a significantly better improvement than when applied to ViT. Can the reason of this be understood? \n\nTypos:\n- row 178, \"uncertainty-refers\" should not have a hyphen\n- 282 Killback–Leibler \n- 345 \"descirbed\"\n- 355 \"we uses\" \n- 371 \"Perforamce\"" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "The idea is rigorous and the implementation introduces a minimal overhead over an existing network." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The manuscript introduces a Bayesian Nonnegative Decision Layer (BNDL) for deep neural network classifiers, with the intent of reformulating them as a factor analysis. This is shown to enhance the interpretability and uncertainty-estimation capabilities of the networks, at least on the examined datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The presentation should be improved, see questions" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Q1: The authors provide a theoretical complexity analysis. What is the practical runtime increase compared to a simple fully-connected layer?\n- Q2: Can the authors provide greater detail on their relatio nto Wang et al. (2024)? 
There is a strong relation in the method (NMF) and the aim to improve interpretability, yet within this paper it is only ever mentioned in passing, without being fully introduced or compared against. The same holds, e.g., for Duan et al. (2024)." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The method is generic enough to be added to an arbitrary deep architecture\n- It performs well under varying experimental settings and architectures and can keep/improve upon its non-interpretable counterpart while greatly improving interpretability \n\n\nOne caveat that should be noted is that I am not too familiar with the current state of the art in interpretability research. As presented, the results look significant, but they might not be." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors focus on improving the interpretability of deterministic neural nets by inserting a probabilistic layer that provides a non-negative factorization for an interpretable classification.\nThey prove (partial) identifiability and evaluate the method on several image classification benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The experiments are limited to a small set of interpretability baselines. \n- Sec 3.1 \"We first adopt a Bayesian perspective to re-examine the DNNs\". This framing is rather crude. The vague fact that one could interpret the input to a softmax as a delta distribution over a latent variable alone is not enough to call something Bayesian. A Bayesian approach requires a well-specified prior combined with a posterior inference. Simply making a model (indirectly) probabilistic is not Bayesian.\n- l218 \"it lacks reparameterization and cannot be optimized\" \nYou can always rely on what is often known as the REINFORCE approach, i.e., $\nabla_a E_{x\sim q_a(x)}[f(x)] = E_{x \sim q_a(x)}[f(x)\nabla_a \log q_a(x)]$. However, you usually don't want to do this as it will punish your gradients with a huge variance, which is why your proposal is much more stable and sane. But as long as you have a density you could in theory do it. \n- (12) the left hand side should be p(Y|X), as you marginalize on the rhs over $\theta$ and $\Phi$\n- Given the Bayesian framing of the paper, a short discussion or mention of what is known as last-layer BNNs is missing. These combine a deterministic network trunk with Bayesian inference over the last layer of a neural net. See, e.g., the references in Harrison et al. (2024). (This is not necessarily the best reference for this recently growing research direction, but one whose references can serve as guidance.) \n- In a similar direction goes the field of evidential deep learning, also known as prior networks, where a prior on the last layer is inserted in a different way. See, e.g., Sensoy et al. (2018) for classification or Amini et al. (2020) and Malinin et al. (2020) for regression. Both research directions, i.e., EDL and LL-BNNs, have a different aim than the authors' proposal but rely on similar mechanics.\n- Regarding overconfidence in l167, I would have expected a reference to the first main study in that direction by Guo et al. (2017).
(At least to my knowledge.)\n- In Thm 1, $e_{(k)}$ is not introduced.\n\n_____\n_Amini et al., Deep Evidential Regression (2020)_ \n_Guo et al., On Calibration of Modern Neural Networks (2017)_ \n_Harrison et al., Variational Bayesian Last Layers (2024)_ \n_Malinin et al., Regression prior networks (2020)_ \n_Sensoy et al., Evidential Deep Learning to Quantify Classification Uncertainty (2018)_ \n\n\n\n\n\n### Typos\nThe paper contains a lot of typos and missing articles, which should be fixed in a thorough proofreading round. A subset of these is:\n- l81 Furthermore, we provide\n- l105, l107 (and maybe others): the citation style is broken; use \\citep and \\citet correctly, please follow the style guide\n- most equations, e.g., (1), (2) lack proper punctuation\n- l206 $\mathbb{R}_+$\n- l241 \"where $h_j$ is an extracted feature\n- l272 of the log-likelihood\n- l345 are described in\n- l440 misclassification, e.g., in the\n- A lot of references are broken, e.g., Dosovitskiy et al. (2020) is a published paper, so is Kingma & Ba's Adam etc." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "The authors should respond to my questions in the Weaknesses section. I have no further questions, although I want to note that there are a fair number of typos in the paper that the authors should clean up.\n\nE.g. I believe the first term inside the integral of Eq (4) should be $p(y|\Phi, \Theta)$?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The methodology and the description of the model and inference process are soundly written. Modeling the factorization matrices as Gamma distributions should, in theory, encourage sparsity. The variational inference process that is described makes sense. \n\n2. Section 4 is an interesting addition to the paper, which describes how the factorization matrices are (partially) identifiable under certain assumptions, and how the authors' model satisfies such criteria.\n\n3. I appreciate the sanity testing for uncertainty/accuracy correlation in Section 5.1.1." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a Bayesian neural network where the final layer is modeled as a non-negative matrix factorization (NMF), i.e. $y \sim \Phi\Theta$, the motivation being that such a model would provide both predictive uncertainty estimates (because we learn a posterior distribution) and interpretability (because we learn a sparse factorization for $y$). Both $\Phi$ and $\Theta$ are modeled as Gamma distributions, and approximated variationally using the Weibull distribution. The authors evaluated their model (on accuracy, uncertainty, and sparsity) on CIFAR and ImageNet."
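The Weibull approximation mentioned in the summary above is what makes the Gamma-distributed factors trainable with pathwise gradients (the alternative being the high-variance REINFORCE estimator discussed in the previous review). A minimal sketch of the standard inverse-CDF reparameterization, with illustrative names and an assumed clamping constant for numerical safety (the paper's exact implementation is not reproduced here):

```python
import torch

def sample_weibull(k: torch.Tensor, lam: torch.Tensor) -> torch.Tensor:
    """Reparameterized draw from Weibull(shape=k, scale=lam).

    Uses the inverse CDF x = lam * (-log(1 - u))**(1/k) with u ~ Uniform(0, 1),
    so gradients flow through k and lam, unlike naive Gamma sampling.
    """
    u = torch.rand_like(lam).clamp(1e-6, 1.0 - 1e-6)  # avoid log(0) at the edges
    return lam * torch.pow(-torch.log(1.0 - u), 1.0 / k)
```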
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My main criticisms would relate to the experimental results:\n\n1. It is not clear to me why it is necessary to compare to non-Bayesian/point-estimate models, considering that the goal of the paper is to provide better uncertainty estimates. As such, I am not sure the ViT results are especially meaningful, i.e. it is unclear to me what I should be comparing ViT-BNDL to.\n\n2. From Table 1, the PAvPU numbers for the ResNet model do not seem to be a huge improvement over the competing methods, especially the recent approaches (BM and CARD). \n\n3. Is there a reason why sparsity values are not shown for the competing approaches in Section 5.1.2?\n\n4. It is not clear to me that the interpretability evaluation metric in Section 5.2 is correct or useful. Specifically, why is unsupervised disentanglement important for ImageNet and CIFAR? Disentanglement does not imply interpretability, and disentangled features are not necessarily the correct ones to learn either (e.g. a spurious feature will be disentangled from a salient feature, but it doesn't mean we want to learn the former).\n\n5. Relatedly, why do we not compare to competing approaches in Section 5.2? Would the authors be able to report the metric in Table 2 and the visualizations in Figure 4 for the models that BNDL was compared to earlier?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The results on both uncertainty and disentanglement use one metric only each. Would it be possible include further metrics?\n\nMore verbal details on exactly how all experiments were performed would be appreciated." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Suggested BDNL seems advantageous for uncertainty evaluation and disentangled representation learning.\n\nThe authors try to perform theoretical analysis of their method.\n\nFor most paragraphs ZeroGPT score was 0%, for some 4% and 8%. Thus rather human-written." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The manuscript suggests using Bayesian non-negative decision layer for improving model's uncertainty evaluation and sparsity (disentanglement), with no (statistically significant) loss of accuracy." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Literature overview: why you do not cite works on DNNs and non-negative factor analysis in the interpretability framework, these are probably the closest works to your manuscript and constitute the core of your work? 
E.g.:\nhttps://proceedings.neurips.cc/paper_files/paper/2022/hash/e53280d73dd5389e820f4a6250365b0e-Abstract-Conference.html\n\nTheorem 1 is not a theorem (please check any statistical/ML literature, like AoS or NeurIPS, for what is a theorem), nor is its \"proof\" a proof; this is just a discussion. I would suggest you change the presentation.\n\nThe improvement of the performance does not seem to be significant at all, which is OK, but the PAvPU might still seem questionable. How many repetitions have been performed, and what are your p-values? How, e.g., were the numbers in Table 1 with +/- in front calculated?\n\nThere are typos in the manuscript (which means it is human-written)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024enhancing,\ntitle={Enhancing Uncertainty Estimation and Interpretability with Bayesian Non-negative Decision Layer},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xJXq6FkqEw},\nnote={under review}\n}" }, "abstract": { "value": "Although deep neural networks have demonstrated significant success due to their powerful expressiveness, most models struggle to meet practical requirements for uncertainty estimation. Concurrently, the entangled nature of deep neural networks leads to a multifaceted problem, where various localized explanation techniques reveal that multiple unrelated features influence the decisions, thereby undermining interpretability. To address these challenges, we develop a Bayesian Nonnegative Decision Layer (BNDL), which reformulates deep neural networks as a conditional Bayesian non-negative factor analysis. By leveraging stochastic latent variables, the BNDL can model complex dependencies and provide robust uncertainty estimation. Moreover, the sparsity and non-negativity of the latent variables encourage the model to learn disentangled representations and decision layers, thereby improving interpretability. We also offer theoretical guarantees that BNDL can achieve effective disentangled learning. In addition, we develop a corresponding variational inference method utilizing a Weibull variational inference network to approximate the posterior distribution of the latent variables. Our experimental results demonstrate that with enhanced disentanglement capabilities, BNDL not only improves the model’s accuracy but also provides reliable uncertainty estimation and improved interpretability." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Factor Analysis", "Uncertainty Estimation", "explainable AI" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
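Since the first review above asks how the counts n_ac, n_au, n_ic, n_iu are actually computed: under the standard definition of PAvPU (Mukhoti & Gal, 2018), they combine as (n_ac + n_iu) / (n_ac + n_au + n_ic + n_iu), given some certainty threshold. A minimal sketch, with the threshold choice left as an assumption:

```python
import numpy as np

def pavpu(correct: np.ndarray, uncertainty: np.ndarray, threshold: float) -> float:
    """PAvPU = P(accurate and certain, or inaccurate and uncertain)."""
    certain = uncertainty <= threshold
    n_ac = np.sum(correct & certain)     # accurate and certain
    n_au = np.sum(correct & ~certain)    # accurate and uncertain
    n_ic = np.sum(~correct & certain)    # inaccurate and certain
    n_iu = np.sum(~correct & ~certain)   # inaccurate and uncertain
    return float(n_ac + n_iu) / float(n_ac + n_au + n_ic + n_iu)
```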
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/1a5c013c5ba99eb7661d84fe0ca49c911b6db609.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/96115f35cd0ca853d328f4eda993ae55475c93f9.zip" }, "title": { "value": "Enhancing Uncertainty Estimation and Interpretability with Bayesian Non-negative Decision Layer" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xJc3PazBwS
Disentangling Textual and Acoustic Features of Neural Speech Representations
main
Active
Disentangling Representations;Spoken language Processing;Speech Emotion Recognition;Interpretability
applications to computer vision, audio, language, and other modalities
3;3;3;5
4;4;3;4
3;3;2;2
2;2;2;2
3;2;3;3
3.5
3.75
2.5
2
2.75
0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. L30 mentions that Whisper has highly entangled representations, in the same list as HuBERT or Wav2Vec2. It is never mentioned/evaluated later; is there any evidence that it is likely to have as entangled representations as SSL models from this list? It is trained purely in a supervised way for text transcription/translation, iirc, hence I would assume it learns purely text-focused features.\n\n2. Do I understand it correctly that non-standard splits of LibriSpeech are used for the purpose of \"ensuring an equal representation of gender and speaker ID\" (S3.3) Is there a strong reason for that in the text transcription tasks? For the sake of comparability with all the existing literature, I would advise using standard some dev/test-{clean,other} splits.\n\n3. Having a single linear probing classifier gets WER of ~50 for W2V2 and HuBERT. Only the pre-finetuned models get reasonable error rates. Is this a good evaluation setup to draw conclusions from?\n\n4. What dataset is used to calculate WER in Table 1? Is this a mix of LibriSpeech and CommonVoice? Those are very different datasets, it would make sense to report them separately.\n\n5. At least a part of motivation of the work is that by using disentangled representations one can be confident that model is using the features it is allowed to. For instance, the model doesn't rely on leaking gender or voice information when making text-based decisions. I generally get the idea, but the transcription vs gender/emotion classification task split is not a particularly convincing combination. If we are worried that the model uses something beyond the text content when making some downstream decisions, we can replace it with an (ASR + text classifier) model. Can we think of a more convincing scenario?\n\n6. Do we actually need VIB? How different it would be if we used the labels to train a combination of ASR, Speaker and Emotion classifiers and used their outputs?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* I find the \"disentanglement evaluation\" part pretty convincing." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Many standard speech representations are learned in a self-supervised way (HuBERT, w2v2, etc) and hence are, essentially, entangled blackboxes that have acoustic and textual features mixed in them in an arbitrary way. One can imagine scenarios where this is undesired, and it would be better to have a control over what/how features are encoded. This paper proposes a method building such disentangled representations, using the IB principle. As a running example, the paper opposes information that encodes textual content and acoustic information, encoding emotion or speaker identity. 
The paper shows that their method successfully disentangles the input features (variants of HuBERT, W2V2). The authors conclude with several interpretability studies of the models that use those features." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The proposed method assumes that we have labeled tasks for all potential downstream tasks. So one starts with general-purpose self-supervised representations such as HuBERT -- which are entangled -- and ends up with representations that are (a) disentangled, but (b) are likely only useful for the tasks where we have labels. In this scenario, I am not entirely convinced that one has to use VIB (see the questions below).\n\n* I am not entirely convinced by the motivations of the paper. If one needs to be confident that models do not use non-textual information while making decisions, they can train models to make those decisions using pure transcripts. This is a simple baseline solution the paper should keep in mind.\n\n* There are some concerns wrt the experimental setup -- see Questions 2, 3, 4." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Why subsample Librispeech and Common Voice so heavily for the transcription task? Librispeech contains 960h of transcribed audio, but this approach uses less than 20h.\n\nHow important is the ordering of the tasks? Would the performance be identical if Emotion or Speaker Id were stage 1 and Transcription was stage 2?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The described approach is sensible and its specifics are clearly described.\n\nThere are a number of interesting analyses based on probing experiments to attempt to identify what information is still available in different layers of the network, and an assessment of the information related to distinct tasks in different frames of the input audio." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper describes an application of information bottleneck training to isolate aspects of a speech representation. The description of the approach is quite clear. The paper includes a variety of analyses of the learned representations that show that disentangling is achieved." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The motivations in the abstract and conclusion are not well connected to the modeling and analysis. E.g. one motivating application is to minimize the privacy risk from encoder representations. This hasn't been assessed in the model or paper.\n\nThe disentangling approach is based on supervised tasks, retaining only the contributions necessary for emotion classification or speaker ID. 
It is unclear how these learned representations would transfer to some new task. Would this approach need to be extended to a \"stage 3\" training process?\n\nMultiple training stages incur additional complexity. It would be interesting to see if these multiple objectives could be combined into a single-stage training.\n\nThe impact on performance in Table 1 does not deliver a consistent message. The transcription results show substantial regressions for both of the FT representations. The improvements to Emotion and Speaker ID are stronger and more consistent on the Large-sized models, while on the Base sizes there are regressions on the wav2vec variants. This sensitivity to SSL objective and model size suggests that this approach may not be robust to new tasks or architectures." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. What is the reason for using the mixture of LibriSpeech and Common Voice?\n2. The WER reported in Table 1 seems to be higher than expected. What is the possible reason for that? For LibriSpeech, what subset is used in the experiments? LibriSpeech-clean or LibriSpeech-other?\n3. The Information Bottleneck (IB) method focuses on retaining only information that’s relevant for predicting the target variable, filtering out anything unnecessary. This makes it dataset-dependent. For instance, when I train the stage 2 framework on a dataset for emotion recognition, the disentangled features capture emotional information but lack speaker-specific information. I wonder if it would be possible to handle both speaker recognition and emotion recognition in stage 2, so that we preserve both emotion-related and speaker-related information. Alternatively, we could consider adding a stage 3 focused on speaker identity, while stage 2 remains dedicated to emotion recognition." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is well-written and easy to follow.\n2. The proposed approach is easy to use.\n3. Experiments in Section 6 align in part with the findings of previous work on layer-wise speech SSL models [1], reflecting the effectiveness of the proposed method.\n\n\n[1] A. Pasad, B. Shi and K. Livescu, \"Comparative Layer-Wise Analysis of Self-Supervised Speech Models,\" ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)," }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper uses the Variational Information Bottleneck framework to separate textual and acoustic features of representations from SSL speech models, such as HuBERT and wav2vec2. This approach involves two stages: first, it isolates textual information by training models to transcribe content while minimizing other, unrelated information. 
The second stage targets acoustic features for tasks like emotion and speaker recognition.\nThey validate the proposed method through experiments on ASR, emotion recognition and speaker identification, showing its effectiveness in distinguishing between acoustic and textual attributes. This approach also has potential applications in privacy preservation, where disentangling speaker identity from transcription could help secure ASR systems." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. There is previous work using the Information Bottleneck for feature disentanglement, such as in [2] and [3]. It would be better to cite these studies and highlight the distinctions between this paper and prior work.\n2. Experiments comparing the proposed method with existing approaches are lacking. As there are lots of works for speech representation disentanglement like AutoVC, SpeechSplit, or FAcodec [4], it would strengthen the paper to report the performance of at least one existing method.\n3. In Table 1, VIB loses essential information for the textual representation, resulting in a much higher WER compared to Probing for HuBERT-FT and Wav2Vec2-FT. Training on a different dataset with a positive outcome might help alleviate this issue.\n\n\n[2] Pan, Z., Niu, L., Zhang, J., & Zhang, L. (2021). “Disentangled Information Bottleneck.” *Proceedings of the AAAI Conference on Artificial Intelligence*\n\n[3] Gege Gao, Huaibo Huang, Chaoyou Fu, Zhaoyang Li, Ran He; “Information Bottleneck Disentanglement for Identity Swapping.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 3404-3413\n\n[4] Ju, Zeqian, et al. \"Naturalspeech 3: Zero-shot speech synthesis with factorized codec and diffusion models.\" arXiv preprint arXiv:2403.03100 (2024)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "NA" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The main strengths of the paper are as follows:\n1. The authors provide a clear motivation and explanation for the problem under consideration.\n2. The method is clearly explained, creating no confusion in grasping the idea. \n3. The experiments section is well-written, with relevant experiments.\n4. The authors answer some key questions related to the work, such as the extent of disentanglement and its benefits.\n5. The last section of the paper talks about prior works in the same domain, giving readers an idea of the novelty of this work. \n6. The authors have further cited some extremely relevant works."
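Several of the reviews above refer to the generic VIB objective that the two-stage framework builds on. As a point of reference, here is a minimal sketch of that objective with a Gaussian code and a standard-normal prior; this is the textbook form (Alemi et al.), not the paper's exact two-stage parameterization:

```python
import torch
import torch.nn.functional as F

def vib_objective(mu, logvar, logits, labels, beta: float) -> torch.Tensor:
    """Generic VIB loss: task term + beta * rate term (sketch).

    The KL term, for q(z|x) = N(mu, diag(exp(logvar))) against a N(0, I)
    prior, upper-bounds I(X; Z), squeezing out input information that is
    not needed to predict the labels; it is averaged here for simplicity.
    """
    task = F.cross_entropy(logits, labels)
    kl = -0.5 * torch.mean(1.0 + logvar - mu.pow(2) - logvar.exp())
    return task + beta * kl
```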
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new framework for disentangling speech representations from neural speech models (like Wav2Vec2 and HuBERT) into two distinct components: textual content (what can be transcribed as text) and acoustic features (like emotion or speaker identity). This separation is important because neural speech models typically create deeply entangled internal representations that combine various features, making it difficult to isolate specific information or suppress potentially sensitive acoustic features (such as gender or speaker identity) in real-world applications.\nThe authors present a two-stage training framework based on the Variational Information Bottleneck technique. In the first stage, a decoder is trained to map speech representations to text while minimizing irrelevant information from input, ensuring only features necessary for transcription are preserved. In the second stage, another decoder is trained that has access to the textual representation from previous stage and is trained to predict target labels for downstream task while minimizing information encoding. They evaluated their framework on emotion recognition and speaker identification tasks, demonstrating that the resulting representations were effectively disentangled - the textual representations could predict transcriptions but performed randomly when predicting acoustic features, while acoustic representations showed the opposite pattern.\nThe authors also analyzed how different layers of pre-trained and fine-tuned Wav2Vec2 models contribute to emotion recognition. They found that in models fine-tuned for automatic speech recognition (ASR), the acoustic contribution to emotion recognition decreases in higher layers while the textual contribution increases. Additionally, they showed that their framework can serve as a feature attribution method to identify the most significant frame representations for a given task, distinguishing between textual and acoustic contributions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Here are the main weaknesses:\n1. I struggle to understand the new idea in this work because the VIB technique has existed for a while.\n2. The concept of employing neural networks to learn or estimate bounds on Mutual information has existed for a long time (see \n a. MINE: Mutual Information Neural Estimation by Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeswar, Sherjil Ozair, Yoshua \n Bengio, Aaron Courville, R Devon Hjelm. \n b. DEEP VARIATIONAL INFORMATION BOTTLENECK by Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, Kevin Murphy\n c. Representation Learning with Contrastive Predictive Coding by Aaron van den Oord, Yazhe Li, Oriol Vinyals\n3. The authors do not provide explanation in Table 1 regarding why WER increase for Fine-tuned models after disentanglement training. \n4. In Figure 2, there seems to be some strange behavior as far as prosody prediction is concerned. Pitch, intensity, rhythm, voice quality, etc have been identified as key contributors to the perception of emotion from speech. It makes little sense as to why the disentangled acoustic representation would remove that information. \n5. It has been shown before that different layers of Self-supervised models (HuBERT and W2V2) learn different types of representation from speech signal (acoustic, prosody and semantic). 
Therefore, Section 6 reaffirms those prior studies while providing no new information for informed readers." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024disentangling,\ntitle={Disentangling Textual and Acoustic Features of Neural Speech Representations},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xJc3PazBwS},\nnote={under review}\n}" }, "abstract": { "value": "Neural speech models build deeply entangled internal representations, which capture a variety of features (e.g., fundamental frequency, loudness, syntactic category, or semantic content of a word) in a distributed encoding. This complexity makes it difficult to track the extent to which such representations rely on textual and acoustic information, or to suppress the encoding of acoustic features that may pose privacy risks (e.g., gender or speaker identity) in critical, real-world applications. In this paper, we build upon the Information Bottleneck principle to propose a disentanglement framework that separates complex speech representations into two distinct components: one encoding content (i.e., what can be transcribed as text) and the other encoding acoustic features relevant to a given downstream task. We apply and evaluate our framework on emotion recognition and speaker identification downstream tasks, quantifying the contribution of textual and acoustic features at each model layer. Additionally, we explore the application of our disentanglement framework as an attribution method to identify the most salient speech frame representations from both the textual and acoustic perspectives." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Disentangling Representations", "Spoken language Processing", "Speech Emotion Recognition", "Interpretability" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/6f712bfbfe7e06c6f54ec556e86c69d93a09140d.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/df11115da9c366e6fdc160dcd2dd20d80cf06051.zip" }, "title": { "value": "Disentangling Textual and Acoustic Features of Neural Speech Representations" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xJljiPE6dg
Language Models Learn to Mislead Humans via RLHF
main
Active
RLHF;reward hacking;human evaluation
alignment, fairness, safety, privacy, and societal considerations
5;6;6;8
4;4;3;3
2;3;3;4
2;4;3;4
3;4;3;4
6.25
3.5
3
3.25
3.5
-0.688247
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to weakness." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper introduces the concept of \"U-SOPHISTRY,\" wherein RLHF unintentionally enables language models to mislead human evaluators without necessarily improving task performance. This novel framing extends prior work on reward hacking and deception in AI, highlighting new risks in standard RLHF pipelines. With AI applications proliferating, ensuring safe and reliable human-AI interaction is critical.\n\n2. The method incorporate diverse and challenging tasks like question-answering and programming tasks.\n\n3. This paper clearly demonstrates the author's intent and contains numerous figures and tables." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper \"Language Models Learn to Mislead Humans via RLHF\" examines how language models fine-tuned with Reinforcement Learning from Human Feedback (RLHF) can unintentionally mislead humans by appearing correct even when wrong. Through experiments on question-answering (QuALITY) and programming tasks (APPS), the authors show that RLHF increases human approval of model responses but does not enhance accuracy, often causing humans to rate incorrect answers as correct. This phenomenon \"U-Sophistry\" suggests that RLHF can make language models more persuasive without improving true correctness, highlighting a significant challenge for model alignment." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The scenario discussed is somewhat perplexing: this paper argues that models trained with RLHF may become more deceptive towards humans without actual improvements in capability. However, RLHF’s effectiveness relies heavily on the choice of reward model and corresponding training data, so if there are issues in human-annotated data, such results are predictable. Thus, the reviewer suggests that the problem stems from humans no longer being able to provide sufficiently high-quality evaluations of the model’s outputs, resembling more of a “Weak-to-Strong” alignment issue, while the discussion in this paper seems to frame it as an issue with the RLHF pipeline itself.\n\n2. Lack of robust countermeasures to mitigate \"U-SOPHISTRY\".\n\n3. Many alignment algorithms have been optimized for the classic RLHF pipeline, such as DPO and KTO. Running different alignment algorithms may lead to varying results in the final model, yet the authors conducted experiments only with the original PPO algorithm." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* Have the authors considered testing their findings on other types of tasks beyond QA and programming?\n* Could the authors elaborate on potential mitigation strategies beyond probing?\n* How might the results differ with more or less experienced evaluators?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* Addresses a critical gap in understanding how language models might naturally learn to mislead humans\n* The experiment is well-designed with appropriate controls\n* First systematic study of unintended sophistry in RLHF" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies that LMs trained with standard RLHF techniques can learn to appear more convincing to human evaluators without actually improving their tasks. Through studies on question-answering and programming tasks, the authors show that RLHF-trained models achieve higher human approval ratings while making human evaluation less accurate." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* This paper only shows the experimental results on only two tasks (question-answering and programming). Without specific experiments, we may not know whether the method would generalize to other important domains where RLHF is used. \n* Figure 1 could benefit from more detailed captions\n* The related work section only covers RLHF literature and could expand discussion on human evaluation methods." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- Why do you think the LLM generates a less readable program?\n- Do you think that the increase in human error rate for programming task is because the more is generating less readable code or because it learns to pass the unit tests?\nI would not conclude that if humans spend more time on a task, they work harder. There could be several other factors that can influence time." 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The work recruits humans and validates the hypothesis that models can learn to perform reward hacking. The human evaluations are well thought out.\n- They perform extensive evaluations and experiments for robustness. They also try methods that can detect I-Sophistry to detect U-Sophistory but find that the methods do not generalize.\n- The insights are impactful and should make researchers and industry practitioners give more thought to designing their RLHF training procedure. \n- The manuscript is well-written and easy to understand." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The work empirically validates the hypothesis that LLMs can learn to perform reward hacking, i.e., achieve higher rewards when RLHFed, despite not actually being better, according to an oracle. For the study, they recruited human evaluators and validated this hypothesis in two settings: QA and programming tasks, showing that after performing RLHF, human evaluators thought that the model performance improved despite the performance not improving. They also show that the false positive rate for human evaluators also increases. They also check if probing methods that can detect incorrect programs generated by models with backdoors don't generalize to RLHFed models where the model performs reward hacking, which is unintentional from the user's end." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- I am not convinced by the design of the programming task used to validate the hypothesis. \n - Why do the authors choose the two simplest unit tests? How would things change if they used the two most difficult unit tests?\n - For the pilot study, how were the human evaluators incentivized? As a developer, I would write two unit tests. One is an easy case, and another is difficult or where programs usually fail. \n - In an ideal scenario for preference tuning for programming tasks, human annotators should only be used for stylistic improvements since we also have very strong methods for checking the correctness of models.\n\n- More studies need to be done on whether incentivizing humans according to correctness and confidence biases human evaluators to be less or more confident." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "The paper involves human subjects in its user study to evaluate the effects of RLHF on misleading human evaluations. This raises potential ethical considerations, particularly around consent, data privacy, and ensuring that the evaluators are fully informed about the study's purpose and potential biases they may encounter." 
}, "flag_for_ethics_review": { "value": [ "Yes, Responsible research practice (e.g., human subjects, data release)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Can the author answer all the questions above?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. The exploration of U-SOPHISTRY provides a novel perspective on RLHF challenges.\n2. The authors conduct experiments and rigorous user studies across two common tasks, measuring correctness, human approval, evaluation error rate, and false positive rate. The results provide deep insights into how RLHF impacts human judgment and evaluation.\n3. Through comprehensive qualitative analysis, the paper examines specific strategies used by LMs that lead to U-SOPHISTRY, such as fabricating evidence and exploiting human testing shortcuts, enhancing our understanding of how RLHF may influence model behavior.\n4. The paper is well-written and easy-to-follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper investigates a phenomenon called \"U-SOPHISTRY\", in which language models (LMs) trained through reinforcement learning from human feedback (RLHF) unintentionally become better at convincing humans that they are right even when they are wrong without improving actual performance. The author demonstrates U-SOPHISTRY through user studies on question-answering (QuALITY) and programming (APPS) tasks. The results indicate that RLHF-optimized models convince human evaluators of incorrect outputs at higher rates compared to their pre-trained counterparts. Through a comprehensive qualitative analysis, the authors identify strategies that the RLHF models use to mislead evaluators, such as fabricating or selectively presenting evidence in QA tasks and generating partial code that passes all human-written tests or is less readable for debugging. Additionally, the authors emphasize that prior work on detecting I-SOPHISTRY (intentionally misleading) is not an effective benchmark for methods aimed at detecting U-SOPHISTRY." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I do not see major issues in the paper, though I have a few minor concerns and some areas where clarifications would be helpful.\n\n**Concerns**\n1. In the programming (APPS) experiment, the authors assume that users will provide test cases for code validation, which may not fully capture real-world scenarios. Users may also seek explanations from the model to better understand the code, especially if they don’t fully understand the initial question, which could result in their inability to write accurate test cases. It would be good to include a user study from this aspect.\n2. A follow-up question is whether all human evaluators fully understand the coding questions? Users may write inaccurate test cases and then incorrectly interpret execution results as passing. Additional analysis on the correctness of test cases written by human evaluators would be beneficial.\n\n**Clarifications**:\n1. Are the Human evaluation error rate and Human false positive rate calculations based on $R^{human}$ rather than $R^{train}$. 
The analysis appears to use real human evaluators ($R^{human}$), but the description on lines 181 and 183 uses $R^{train}$ instead of $R^{human}$.\n\n2. It would be clearer to display the distribution of total cases shown to human evaluators to indicate how many were the initial responses and how many were the RLHF responses." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We find LMs can unintendely learn to mislead real human evaluators on realistic tasks via RLHF." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024language,\ntitle={Language Models Learn to Mislead Humans via {RLHF}},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xJljiPE6dg},\nnote={under review}\n}" }, "abstract": { "value": "Language models (LMs) can produce errors that are hard to detect for humans, especially when the task is complex.\nRLHF, the most popular post-training method, may exacerbate this problem: to achieve higher rewards, LMs might get better at convincing humans that they are right even when they are wrong. We study this phenomenon under a standard RLHF pipeline, calling it ``U-Sophistry'' since it is \\textbf{U}nintended by model developers. Specifically, we ask time-constrained (e.g., 3-10 minutes) human subjects to evaluate the correctness of model outputs and calculate humans' accuracy against gold labels. On a question-answering task (QuALITY) and programming task (APPS), RLHF makes LMs better at convincing our subjects but not at completing the task correctly. RLHF also makes the model harder to evaluate: our subjects' false positive rate increases by 24.1% on QuALITY and 18.3% on APPS.\nFinally, we show that probing, a state-of-the-art approach for detecting \\textbf{I}ntended Sophistry (e.g.~backdoored LMs), does not generalize to U-Sophistry. Our results highlight an important failure mode of RLHF and call for more research in assisting humans to align them." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "RLHF", "reward hacking", "human evaluation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/2fe34d0184f094879c9041193132f3cf707aeedf.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/83e0af7e7df11514f20d02957de9b723a3800f91.zip" }, "title": { "value": "Language Models Learn to Mislead Humans via RLHF" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xJtWqVBZya
DrivingWorld: Constructing World Model for Autonomous Driving via Video GPT
main
Withdraw
world model;video generation
generative models
Xiaotao Hu;Wei Yin;Mingkai Jia;Junyuan Deng;Xiaoyang Guo;Qian Zhang;Xiaoxiao Long;Ping Tan
~Xiaotao_Hu1;~Wei_Yin2;~Mingkai_Jia1;~Junyuan_Deng1;~Xiaoyang_Guo1;~Qian_Zhang7;~Xiaoxiao_Long2;~Ping_Tan2
1;5;6;6
1;5;5;3
1;3;3;3
1;3;3;3
1;3;3;3
4.5
3.5
2.5
2.5
2.5
0.8044
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The idea of extending the GPT framework to video generation for autonomous driving is innovative and has the potential to significantly impact the field.\n2. The proposed spatial-temporal fusion mechanisms and the next-state-prediction strategy are well-thought-out and seem technically sound.\n3. The paper provides a thorough set of experiments demonstrating the model's capabilities, including comparisons with other state-of-the-art methods.\n4. The paper is well-structured, and easy to read." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents an approach to constructing a world model for autonomous driving using a GPT-style architecture, which is an application of autoregressive models in the visual domain. The authors claim that their model, DrivingWorld, is capable of high-fidelity, long-duration video generation with improved temporal coherence and controllability. The experiments and comparisons with existing methods are well-documented, and the paper is generally well-organized and clearly written." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Temporal-aware Vector Quantized Tokenizer has been extensively utilized in video generation fields, such as SORA, and thus does not constitute a technological contribution.\n2. It is not entirely clear how well the model generalizes to diverse driving scenarios beyond those seen in the training data. Additional experiments or analysis on how the model performs on out-of-distribution data would strengthen the paper.\n3. 
The paper could benefit from a discussion on the computational resources required for training and inference, especially considering the model's size. Discussing potential optimizations or trade-offs in computational efficiency could make the paper more appealing to practitioners.\n4. The paper missed some relevant works, such as \"VDT: General-purpose Video Diffusion Transformers via Mask Modeling\" (ICLR 2024)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The paper contains inconsistent descriptions regarding the video generation frequency, stating 6Hz in the method section ( line 161 \"capable of extending predictions beyond 30 seconds at a frequency of 6Hz. \" ) but 5Hz in the experiment section (line 416 \"our model can generate up to 640 future frames at 5 Hz, resulting in 128-second videos with strong temporal consistency.\")" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper demonstrates clarity in its presentation and organization. The writing flows logically from motivation to implementation, while the figures illustrate key concepts and results. \n2. The paper's core innovation, applying temporal-aware GPT architecture to autonomous driving world modeling, represents an advancement in the field. \n3. The introduction of Dropout for Drifting-free Autoregression is a good solution to generating long-driving video sequences. Experiments show this technique addresses the common problem of quality degradation in long-term predictions." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes DrivingWorld, a self-driving world model based on the GPT architecture for generating high-fidelity and long-term video sequence predictions. The model improves performance through three key innovations: Temporal-Aware Tokenization, Hybrid Token Prediction, and Long-time Controllable Strategies. Experiments show that the model can generate more than 100 seconds of high-quality video at a frequency of 5Hz. Compared with the traditional GPT structure, this method significantly reduces the computational cost by decoupling spatiotemporal processing while maintaining better temporal consistency and structural integrity." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The biggest concern lies in the model's controllability claims. While the paper demonstrates the model's \"controllability\" with vehicle trajectory control, this is primarily restricted to lane changes on straight roads with minimal viewpoint variations. The surrounding environment, including other vehicles and road structures, remains purely autoregressive without direct control. 
\nBesides, the absence of more challenging scenarios like turning left/right in intersections raises questions about the model's true generalization capabilities and control flexibility.\n2. The generated videos still exhibit noticeable visual artifacts and physical inconsistencies. For instance, the black car in Figure 5's \"300s\" subfigure and the project webpage's Example 3 demonstrate unrealistic vehicle collision scenarios at the 14-second mark. \nThese issues highlight a considerable challenge: the model's inability to handle physically complex scenarios where vehicles are in close proximity or potential collision situations, which are crucial for autonomous driving applications.\n3. The method's long video generation is excellent, but the model's FID (16.4) and FVD (174.4) scores, while reasonable, do not lead the benchmark comparisons in Table 1. This becomes more apparent when considering recent SOTA works like DiVE and Vista, which are absent from the comparison." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 1 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "-" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "-" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "-" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper breaks the author's anonymity. I could find the full author list with their affiliations and personal web pages in two clicks, i.e., first click on the link in the abstract of the paper to go to the repo, then click on the \"index.html\" there. 
\nThis violates the statement given by the authors \"Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.\" thus I believe the paper has to be withdrawn.\n\nHere is the snippet of the index.html file containing the paper title and personal information of the authors:\n\n```\n <h1 class=\"title is-1 publication-title\"> <a style=\"color:#D46EE8\">DrivingWorld</a>: Constructing World Model for Autonomous Driving via Video GPT</h1>\n <!-- <div class=\"is-size-5 publication-authors\"> -->\n <!-- Paper authors -->\n <!-- <span class=\"author-block\">\n <a href=\"https://huxiaotaostasy.github.io/\" target=\"_blank\">Xiaotao Hu<sup>1,2,*</sup></a>,</span>\n <span class=\"author-block\"> -->\n\n <!-- <a href=\"https://yvanyin.net/\" target=\"_blank\">Wei Yin<sup>2,*</sup></a>,</span>\n <span class=\"author-block\">\n\n <a href=\"https://scholar.google.com/citations?user=fcpTdvcAAAAJ&hl=en&oi=ao\" target=\"_blank\">Mingkai Jia<sup>1,2</sup></a>,</span>\n <span class=\"author-block\">\n\n <a href=\"#\" target=\"_blank\">Junyuan Deng<sup>1,2</sup></a>,</span>\n <span class=\"author-block\">\n\n <a href=\"https://scholar.google.com/citations?user=CrK4w4UAAAAJ&hl=en&oi=ao\" target=\"_blank\">Xiaoyang Guo<sup>2</sup></a>,</span>\n <span class=\"author-block\">\n\n <a href=\"https://scholar.google.com/citations?hl=en&user=pCY-bikAAAAJ\" target=\"_blank\">Qian Zhang<sup>2</sup></a>,</span>\n <span class=\"author-block\">\n\n <a href=\"https://www.xxlong.site/\" target=\"_blank\">Xiaoxiao Long<sup>1,†</sup></a>,</span>\n <span class=\"author-block\">\n\n <a href=\"https://scholar.google.com/citations?user=XhyKVFMAAAAJ&hl=en&oi=ao\" target=\"_blank\">Ping Tan<sup>1</sup></a></span>\n <span class=\"author-block\">\n\n </div> -->\n\n <!-- <div class=\"is-size-5 publication-authors\">\n <span class=\"author-block\">\n <sup>1</sup> Hong Kong University of Science and Technology | <sup>2</sup>Horizon Robotics <br>\n </span>\n...\n```" }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "-" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Could you provide more details on the implementation specifics of training on 32 NVIDIA 4090 GPUs?\n2. What is the resolution of the methods reported in Table 1? Also, could you explain why longer videos result in a slightly higher FVD (e.g., 122.7 vs 174.4)?\n3. Could you elaborate on the computational costs associated with integrating self-attention into the Temporal-aware Image Tokenizer?\n4. How does the performance of DrivingWorld compare to that of the Llama series?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. 
The proposed temporal-aware Image Tokenizer achieves the highest scores across all four metrics, indicating robust performance.\n2. The paper reflects a significant effort and comprehensive workload." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, a GPT-style model called DrivingWorld is introduced. DrivingWorld incorporates several spatial-temporal fusion mechanisms to effectively model both spatial and temporal dynamics, enabling high-fidelity, long-duration video generation. Experiments confirm that this proposed method is capable of producing high-fidelity and consistent video clips over extended periods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. There appears to be an error in line 488 regarding the batch size; it might be beneficial to verify whether it should be 32 instead of 3.\n2. The experiments indicate that the RMS strategy does not perform optimally. It might be beneficial to assess its effectiveness under a full experimental setting.\n3. The experiments are conducted solely on nuPlan or nuScenes datasets. The generalization capability of the 1B model across different datasets needs further evaluation to ensure it does not overfit to the nuPlan series dataset.\n4. The concept of the World Model may be considered controversial and could benefit from further clarification to establish its acceptance and validity within the field." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@misc{\nhu2024drivingworld,\ntitle={DrivingWorld: Constructing World Model for Autonomous Driving via Video {GPT}},\nauthor={Xiaotao Hu and Wei Yin and Mingkai Jia and Junyuan Deng and Xiaoyang Guo and Qian Zhang and Xiaoxiao Long and Ping Tan},\nyear={2024},\nurl={https://openreview.net/forum?id=xJtWqVBZya}\n}" }, "abstract": { "value": "Recent successes in autoregressive (AR) generation models, such as the GPT series in natural language processing, have motivated efforts to replicate this success in visual tasks. By leveraging the next-token prediction strategy, GPT-style models can forecast future events from past data. Some research aims to extend this approach to autonomous driving by building video-based world models capable of generating realistic future video sequences and predicting the ego state. However, the prior works tend to produce unsatisfactory results, since the classic GPT framework is designed to handle 1D contextual information, such as text, and lacks the inherent capability to model the spatial and temporal dynamics necessary for video generation. In this paper, we present DrivingWorld, a video-based world model for autonomous driving via a new GPT structure with spatial-temporal design. The key idea is to disentangle temporal and spatial information in the generation. Specifically, we first propose next-frame-prediction strategy to model temporal coherence between consecutive frames and then apply next-token-prediction strategy to capture spatial information within a frame. With the hybrid design, our work is capable of producing high-fidelity and consistent video clips with long-time duration. Experiments show that compared to the prior works, our method presents better quality of visual effects and more accurate controllable future video generation." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": { "value": [ "~Xiaotao_Hu1", "~Wei_Yin2", "~Mingkai_Jia1", "~Junyuan_Deng1", "~Xiaoyang_Guo1", "~Qian_Zhang7", "~Xiaoxiao_Long2", "~Ping_Tan2" ] }, "authors": { "value": [ "Xiaotao Hu", "Wei Yin", "Mingkai Jia", "Junyuan Deng", "Xiaoyang Guo", "Qian Zhang", "Xiaoxiao Long", "Ping Tan" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "world model", "video generation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": { "value": "hu|drivingworld_constructing_world_model_for_autonomous_driving_via_video_gpt" }, "pdf": { "value": "/pdf/aad7fc36550d4db84152c09c15aae1687253abc5.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "DrivingWorld: Constructing World Model for Autonomous Driving via Video GPT" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xKDZAW0He3
SeCom: On Memory Construction and Retrieval for Personalized Conversational Agents
main
Active
memory management;conversational agent;RAG;text segmentation;prompt compression
applications to computer vision, audio, language, and other modalities
3;6;6;6
5;4;4;3
2;3;3;3
2;3;3;3
3;4;3;3
5.25
4
2.75
2.75
3.25
-0.816497
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please conduct additional experiments to demonstrate that the potential weaknesses do not exist." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The article is well-written and accessible.\n2. Innovative memory granularity: It introduces segment-level memory construction, overcoming the limitations of turn-level and session-level methods, allowing for better capture of conversation structure.\n3. The use of LLMLingua-2 for compression and denoising improves memory retrieval accuracy and reduces computational load.\n4. The paper demonstrates SECOM’s significant advantages in memory retrieval and response quality through experiments on multiple benchmark datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper effectively breaks down the memory construction process into several phases: segmentation, denoising, and retrieval. This structured approach not only aids in conceptualizing the entire workflow but also provides a solid foundation for systematically enhancing each stage. SECOM further refines the process by incorporating compression-based denoising to improve memory retrieval. These adjustments reflect a well-thought-out strategy for optimizing how information is segmented, processed, and retrieved from long-term conversational memory. I believe the proposed method is both logical and persuasive." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While the paper proposes segment-level memory construction, it lacks in-depth analysis of how to determine the optimal segment granularity, and the specific impact of different granularities on performance is not fully explored.\n2. The paper does not thoroughly explore the performance of the SECOM method in dialogue systems of different scales or across different domains. Specifically, the scalability and generalization of the model in resource-constrained environments are not analyzed in detail." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "(please also see weaknesses)\n\n* How many segments per session do you get for the datasets you experiment on? 
\n\n* Is it possible that the optimal segmentation depends on the query? For instance, round 1, 2 may be about topic A but round 2, 3 may be about topic B. Depending on whether A or B is asked, the segmentation method could be different?\n\n* Have you conducted any human evaluation of the correctness of the segmentation? If so, what are the common error patterns?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The granularity of the memory is an important design choice. Similar to [1] for general RAG, this work fills in the gap in the domain of conversational agents. The conclusions established here should be impactful for future work in this domain. \n\n* The proposal is verified in a number of settings, including various benchmarks and models. This comprehensiveness bolsters the validity of the work. I also appreciate testing the result with various evaluation metrics.\n\n* Overall, the writing is clear. The proposal is motivated and formulated well.\n\n[1] Dense x retrieval: What retrieval granularity should we use? Chen et al., 2023." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper rethinks the memory granularity for chat agents that rely on retrieval-augmented generation to memorize history information. Based on empirical insights, this paper proposes a new granularity for segmenting the chat history that lies between turn and session. In addition, the authors propose to use a prompt compression method to further compress the history before presenting it to the reader language model. Experiment results show that the proposed approach is effective in a number of long-term dialogue settings." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* Based on my understanding, SeCom is only tested to work with GPT-4 as the segmentation model, which is massive, costly, and thus unrealistic for many real-world applications. I would like to see a discussion on how well the framework can work without GPT-4-level models, even with some fine-tuning like in [1]. \n\n* While adding a denoising stage is an innovative proposal, there is no technical novelty in the denoising methodology itself as this work directly uses a previous prompt compression method. \n\n* Other design aspects of the memory retrieval are ignored. For instance, it seems that this work always uses the value itself as the key, while a more advanced indexing approach could possibly reduce the need for a more “correct” value granularity [2]. \n\n[1] Dense x retrieval: What retrieval granularity should we use? Chen et al., 2023. \n\n[2] HippoRAG: Neurobiologically Inspired Long-Term Memory for Large Language Models. Gutiérrez et al., 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "NA" }, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. How does Figure 2 support the following statement? “Our findings indicate that turn-level, session-level, and summarization-based methods all exhibit limitations in terms of the accuracy of the retrieval module as well as the semantics of the retrieved content, which ultimately lead to sub-optimal responses, as depicted in Figure 1, Figure 2, and Table 1.”\n2. In Figure 3(a), the formula for compression rate is given as compression rate = (#tokens after compression) / (#tokens before compression). If both “tokens after compression” and “tokens before compression” are 100, then the compression rate is 100%? Shouldn’t the compression rate be 0% in this case?\n3. In Table 1, why does BM25 perform better on LOCOMO, while MPNet performs better on Long-MT-Bench+?\n4. If traditional topic segmentation methods were used instead of SECOM in Table 4, what experimental results would be observed?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper adopts prefix-tuning and reflection to optimize segment prompts, further adapting them to large language models.\n2. A denoising technique is used to reduce redundant information in dialogue, thereby improving retrieval accuracy.\n3. The writing is well-organized, with careful and precise use of mathematical symbols." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes the SeCom framework for memory construction and retrieval in dialogue processes. Unlike previous work that uses dialogue-level, session-level, or turn-level units to construct memory, this paper adopts segment-level, essentially topic-level, content to build memory. Additionally, a denoising operation is applied during retrieval to improve accuracy. The final experiments demonstrate that using segment-level as the memory unit effectively enhances QA results." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Using segment-level content as the dialogue unit is not novel.\n2. The dialogue segmentation method employed is an integration of existing methods, with no specific improvements or optimizations tailored for dialogue memory purposes." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "see Weaknesses" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "SeCom constructs a memory bank with topical segments and utilizes LLMLingual-2 for denoising, improving memory retrieval accuracy and reducing noise from redundancy in natural language. Experiments show that SeCom outperforms turn-level and session-level baselines and state-of-the-art methods on LOCOMO and Long-MT-Bench+. Additionally, it excels in dialogue segmentation tasks such as DialSeg711 and TIAGE.Further analysis and ablation studies confirm the contributions of the segment-level memory units and the compression-based denoising technique." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper constructs a memory bank with topical segments and utilizes LLMLingual-2 for denoising, thereby improving memory retrieval accuracy and reducing noise from redundancy in natural language. Experiments show that this paper outperforms turn-level and session-level baselines, as well as state-of-the-art methods. Additionally, it excels in dialogue segmentation tasks.\nFurther analysis and ablation studies confirm the contributions of the segment-level memory units and the compression-based denoising technique." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **Dialogue Scope Limitations**: Conversational agent should encompass a broader range of dialogue types including task-oriented conversations, chit-chat, role-playing dialogues and conversational question answering. Currently, the evaluation is restricted to question-answering tasks, specifically single-turn interactions. \nThis narrow focus undermines the conversational agent's versatility and applicability in real-world scenarios. To provide a more comprehensive assessment, additional experimental results across various dialogue types should be included.Task-oriented datasets like MultiWOZ, open-ended conversation datasets like PersonaChat or multi-turn question answering datasets like CoQA are needed.\n- **Evaluation Methodology**: The evaluation methods primarily rely on auto-machine metrics such as bleu, roughl, bertscore and gpt4score based on large language model(LLM), which has significant limitations and may not fully capture the nuances of human interactions. The absence of human evaluations is a significant. \nIncorporating human assessments such as response relevance, coherence and factual accuracy would provide a richer perspective on the conversational agent's performance. Moreover, establishing a consistency measure between human evaluations and auto-machine metrics is crucial. \nGiven the large language model often generate lengthy responses, some of which may be valid yet not included in the standard answers, human insights could enhance the credibility of the results.\n- **Inclusion of Recent Methods**: The evaluation baselines currently lack consideration of the latest memory-enhanced methods, such as MPC and COMEDY discussed in the related work. 
These methods represent significant advancements in memory compression for long-term conversations and should be included in comparative analyses to ensure a robust evaluation framework." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "A system facilitates long-term conversational agents by constructing a memory bank at segment level while applying compression-based denoising to enhance memory retrieval." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024secom,\ntitle={SeCom: On Memory Construction and Retrieval for Personalized Conversational Agents},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xKDZAW0He3},\nnote={under review}\n}" }, "abstract": { "value": "To deliver coherent and personalized experiences in long-term conversations, existing approaches typically perform retrieval augmented response generation by constructing memory banks from conversation history at either the turn-level, session-level, or through summarization techniques.\nIn this paper, we explore the impact of different memory granularities and present two key findings: (1) Both turn-level and session-level memory units are suboptimal, affecting not only the quality of final responses, but also the accuracy of the retrieval process.\n(2) The redundancy in natural language introduces noise, hindering precise retrieval. We demonstrate that *LLMLingua-2*, originally designed for prompt compression to accelerate LLM inference, can serve as an effective denoising method to enhance memory retrieval accuracy.\n\nBuilding on these insights, we propose **SeCom**, a method that constructs a memory bank with topical segments by introducing a conversation **Se**gmentation model, while performing memory retrieval based on **Com**pressed memory units.\nExperimental results show that **SeCom** outperforms turn-level, session-level, and several summarization-based methods on long-term conversation benchmarks such as *LOCOMO* and *Long-MT-Bench+*. Additionally, the proposed conversation segmentation method demonstrates superior performance on dialogue segmentation datasets such as *DialSeg711*, *TIAGE*, and *SuperDialSeg*." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "SeCom: On Memory Construction and Retrieval for Personalized Conversational Agents" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
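The SeCom reviews turn on two concrete points: segment-level (rather than turn- or session-level) memory units as the retrieval granularity, and the compression-rate definition one reviewer questions. A minimal sketch of segment-level retrieval follows — the memory bank, segment texts, and TF-IDF retriever are hypothetical stand-ins (the paper uses BM25/MPNet retrievers and a learned segmentation model), not the authors' pipeline:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical memory bank: each entry is one already-segmented topical chunk
# of conversation history (the segmentation step itself is elided here).
segments = [
    "user plans a hiking trip; assistant suggests trails and packing gear",
    "user asks about Python decorators; assistant walks through examples",
    "user and assistant discuss the user's cat's diet and vet visits",
]

def retrieve(query, segments, k=2):
    """Return the k segments most similar to the query under TF-IDF cosine."""
    vec = TfidfVectorizer().fit(segments + [query])
    sims = cosine_similarity(vec.transform([query]), vec.transform(segments))[0]
    return sorted(zip(sims, segments), reverse=True)[:k]

print(retrieve("what gear did we pick for the hike?", segments))

# The compression-rate definition a reviewer asks about is literally
# n_after / n_before, so a value of 1.0 (100%) means nothing was removed.
def compression_rate(n_tokens_after, n_tokens_before):
    return n_tokens_after / n_tokens_before
```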
xLPakPOKDX
Causally Motivated Diffusion Sampling Frameworks for Harnessing Contextual Bias
main
Active
Causal Inference;Diffusion Models;Contextual bias;Spurious Correlations;Object Cooccurrence;StableDiffusion
generative models
3;5;6;6
5;4;4;3
2;3;3;3
2;2;2;3
2;2;2;3
5
4
2.75
2.25
2.25
-0.866025
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "NA" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "see weaknesses" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper introduces a novel, causally motivated approach to address contextual bias in diffusion models, which effectively enhances image diversity and fidelity without requiring retraining or extensive data.\n\n2. The proposed methods are validated on multiple large-scale datasets, such as Visual Genome and COCO, demonstrating consistent performance improvements in key metrics like FID and LPIPS.\n\n3. The framework is adaptable and efficiently addresses contextual bias within the diffusion process, broadening the application scope of diffusion models for diverse and controlled image generation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a causally motivated approach to enhance image diversity and fidelity in large-scale diffusion models by addressing contextual bias, without the need for retraining or extensive data access. The proposed methods involve causality-inspired techniques to modulate the influence of contextual information during the diffusion process, thus balancing realistic image generation with diverse outputs. Through experiments on datasets like Visual Genome and COCO, the approach demonstrates significant improvements in metrics such as FID and LPIPS compared to standard diffusion models. This work contributes a novel framework for controlled image synthesis, enabling broader applicability of diffusion models in creative and diverse image generation tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. There is a lack of robustness when the sampled confounder $𝐶′$ is semantically distant from the prompt $𝑌$, leading to generated images that may ignore the confounder altogether​. Besides, the framework’s dependence on predefined confounders may limit its flexibility when generating images outside of commonly biased contexts, reducing adaptability in less standardized environments.\n2. The approach depends on complex causal graphs and sampling chains, which may lead to higher computational demands and slower generation times, limiting its scalability​.\n3. Some generated images may exhibit unnatural object combinations, particularly when weakening contextual bias, which might detract from the realism of the results​. \n4. While the framework introduces techniques to adjust contextual bias, it does not provide a quantitative evaluation of how well these adjustments meet specific user-defined objectives or bias levels." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. I have a simpler baseline to suggest instead of the multi-step sampling chain in Eq. 6: use a VLM on the original $(x, y)$ as: \"*given this image, describe a list of common nouns that do not occur in this scene, but could be reasonably expected to co-occur and increase the diversity*\" – this will give you $c’$. You can also check for errors by asking the VLM to extract nouns from the scene (which you already do, *L270*) and the CB- technique (*L237-241*) and removing them if they are extracted from the above prompt. It is unclear to me if the sampling chain would outperform this baseline, especially because while marginalizing over truly unconditional samples as in Eq 6 *will* increase scene diversity, but can give you objects that are very out of place (e.g. Fig 4, a tree in a bedroom, motorcycles in a kitchen, etc.) This is merely an alternate suggestion, but I would like to hear the authors' thoughts about why this may or may not work compared to CB-.\n2. I am a little confused by Eq 6. The starting point is marginalizing the likelihood of prompt $y$ given image $x’$ over all unconditionally generated images $x’$, which you do with a VLM. To compute this, you need to marginalize over *all* possible unconditionally generated images, which is intractable. A reasonable empirical approximation is to get a large collection of (hopefully diverse) unconditional images and marginalize over them, which is what I believe you are doing? **(a)** How many images $x’$ do you marginalize over? Is it $10000$ as you write in Fig. 1 and is this from COCO test+val (*L387*)? Please provide these details explicitly in Sec 3.3 **(b)** Are these diverse enough to get the non-co-occurring conditionings you need to reduce contextual bias? I suggest a small comment or discussion towards this question.\n3. I would like more information on using CB+ and CB- with other conditionings (*L472-L483*), as I find this quite interesting and practically relevant for the community – a more explicit description of how alternate conditionings (e.g. ControlNet content or DEADiff style) can be used complementary to CB+ and CB- would be helpful" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This is a relevant and important problem for the community, and I found it well motivated. I appreciate the nuance in stating contextual bias is “not inherently bad” (L57-58) and providing two frameworks to tweak the bias in both directions\n2. There are many experiments examining specific aspects of the framework, e.g. its impact on realism and diversity of generated images (Tab.1, 4), adherence to the original prompt (Tab 3), qualitative results (Fig 5), and its complementary nature with other frameworks (Fig 6)\n3. 
The CB- framework is a very interesting contribution - if one is able to learn a \"good\" confounder distribution, it may help adding more diverse contextual biases to generative models" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose two causally-motivated sampling frameworks for Latent Diffusion Models, which either increase or decrease their contextual biases. \n\n1. CB+ increases the bias by using an LLM to describe confounders (objects) in a scene, and conditioning LDM sampling on these confounders\n2. CB- decreases the bias by \"retrieving\" confounders, marginalized over the distribution of unconditionally generated images. This attempts to retrieve confounders (objects) which are not explicitly co-occuring with the original scene, thereby increasing scene diversity.\n\nThe authors present results on Visual Genome and use COCO to sample confounders." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tWriting flow needs improvement. I found myself having to skip ahead to find where things were introduced or explained in the writing, and in some areas I was left with no clear answer (see below)\n2.\tWhat LDM is used for experiments (Fig 2, 4-6) ? Visually, it appears to be something similar to older versions of Stable Diffusion (e.g. 1.4, 2.1) From *L509*, it doesn’t seem to be SDXL throughout. This is very unclear and greatly detracts from being able to contextualize the results (some details in Weakness 3). Please explicitly add these details for all experiments.\n3.\tIt appears that CB+ is replacing the contextual bias of the LDM with the contextual bias of the LLM (Gemini), and CB- is doing the same with a VLM (LlaVa). Assuming that the LDM is a slightly weaker model (see Weakness 2) detracts slightly from the FID comparisons Tab.1 – Gemini and LlaVA have: 1. much higher capacity (# params) 2. Much larger pretrain datasets than older Stable Diffusions, and thus their contextual biases may be of much higher quality. This makes it harder to make a fairer comparison. If all these results are with SDXL, a much stronger LDM, this is less of a concern (but this is unclear and should be specified) \n4.\tHow the retrieved confounder $c’$ is used practically in CB+ and CB- is very unclear (I understand the math from Eq. 3 and Eq. 7). Are you adding the nouns from $c’$ directly to the prompt $y$ to generate a new image $x$? From *L387*, it sounds like you do not do this. My understanding of CB- (mostly from *L386-395*) is that you generate 10K images from COCO test+val captions and extract a set of nouns from all these images. You then randomly add these nouns to the prompt *y* (since you are not conditioning *y | c’*). In my opinion, this is the primary weakness of this work. Please provide a step-by-step description of how $c'$ is incorporated in the image generation process in practice for both CB+ and CB-.\n5. The captions of figures need to be more self-contained. It is quite hard to understand them without referring back and forth from the text." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to the concerns raised in the weaknesses section above." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper provides an interesting formulation for the problem of contextual bias in image generations in the context of causal inference. This provides a novel perspective for thinking about how contextual bias influences generated images." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the issue of contextual bias in image generation within the framework of causal inference. By leveraging this formulation, the authors propose two methods to either strengthen or weaken the influence of contextual biases during the image generation process. These methods rely on utilizing LLMs or VLMs to modify text prompts. The authors suggest that these adjustments will lead to more diverse and realistic generated results." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tWhile this paper formulates the problem of contextual biases in image generation using causal graphs and confounders, this formulation is overcomplicated and unnecessary for addressing the problem at hand. Although the theoretical framing is interesting, the proposed method largely boils down to a refined form of prompt engineering.\n2.\tThe major concern for this paper is the novelty of the proposed method. Retrieving co-occurring objects using LLMs and identifying objects appearing in images using VLMs is trivial and has been commonly practiced in the task of text-to-image generation. The integration of LLMs and VLMs for prompt engineering is widely known and not innovative.\n3.\tDealing with contextual biases in image generation is not a particularly challenging task for modern diffusion models like Stable Diffusion, DALLE, and Flux. These models are highly capable of generating diverse and complex images based on input prompts. They can easily generate unconventional combinations like “astronaut riding a horse on Mars” with prompt engineering along, without the need for special techniques to bypass contextual biases.\n4.\tThe experiment should include more challenging cases that truly require causal modeling to demonstrate the significance of the approach. Without such cases, the relevance of the method remains limited." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "This paper does not have ethical concerns that are worrisome." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1.To estimate the distribution of confounding factors, this method requires multiple sampling rounds from LLMs and VLMs; for example, generating a single image may involve at least 10 VLM calls, which is time-consuming. In line 282, the author mentions pre-sampling methods. Could you clarify the time required for this preprocessing, the approximate space complexity, and provide a detailed breakdown of the steps taken to reduce computation time? This part appears somewhat unclear.\n\n2.Would it be beneficial to include a deeper discussion on the connection between diffusion bias and LLM/VLM bias, as this is a central focus of your research? Exploring whether a gap exists, if it can be quantified, and supporting this with further literature could enhance the work." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This work insightfully recognizes that contextual bias is not inherently negative. Contextual bias can be a crucial component in generating natural scenes, and enhancing control over it is essential. Through causal graph modeling, combined with the robust reasoning capabilities of LLMs/VLMs, they automatically (a key point) adjust the \"amount of embedded commonsense\" in the generated image results without any explicit training.\n\nThe highlight of this work is its approach to embedding causal intervention mechanisms in generative models, enabling automated confounder estimation to implement do-operations. While the use of causal graphs in the CV field has been widely discussed, the authors' combination of these methods with generative modeling is novel and yields promising results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper examines the phenomenon where diffusion models tend to generate images with certain preferences due to biases inherent in the training dataset. By applying causal learning framework, the influence of confounding factors is either enhanced or mitigated, thus making the generated images lean more toward \"commonsense\" or \"counter-commonsense\" representations. Notably, the authors leverage another form of bias, that present in LLMs/VLMs, to sample and estimate the distribution of confounding factors. And my understanding is that the core of this paper is not the diffusion model itself but rather the use of LLMs and VLMs to distinguish between commonsense and counter-commonsense content." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Hallucination of Large Model (Especially in Vision Language Model)\n\nThe limitations section highlights the constraints of VLMs. It is important to note that VLMs often experience hallucination issues. For example, when both a horse and a donkey appear in an image, the VLM may incorrectly label both as “donkey.” This is not a “beneficial” bias (as the authors point out in the paper) but rather a “harmful” hallucination issue, leading to inaccurate probability estimation when applying the Do operator. 
\n\nFurthermore, how can we ensure that the commonsense knowledge (referred to as bias in the paper) embedded in LLMs/VLMs aligns with that in the Diffusion Model? After all, these models are trained on different datasets and strategies. If there is inconsistency in commonsense between them, then using LLMs/VLMs to estimate P(c) may not be an ideal approach. For instance, if we aim to generate an image of a \"bird,\" LLMs/VLMs might associate \"bird\" with \"tree,\" while the Diffusion model may associate \"bird\" with \"sky.\" In such cases, inconsistencies in bias arise. I believe expanding the discussion on this point could be beneficial to your work.\n\n2. While the intervention mechanism is detailed, control over diffusion remains overly rough.\n\nThe core contribution of this paper is not in Diffusion itself but rather in using LLMs/VLMs combined with causal inference to extract a text segment \"c\" from the prompt. This prompt + \"c\" serves as a new conditional input for generation, with different \"c\" segments assigned distinct weights, ultimately leading to weighted summation on the latent space or score space. \n\nHowever, this approach relies heavily on text prompt conditioning, limiting its effectiveness in handling complex cases. For instance, even with highly detailed prompts, the SD model may occasionally ignore specified objects in the prompt [1], which constrains the capability of the proposed algorithm. An insightful work in this area can be found in [2], which delves deeper into the mechanisms of the diffusion latent space, enabling the generation of counterintuitive or \"counter-commonsense\" images.\n\n[1] Chefer, Hila, et al. “Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models.” ACM Transactions on Graphics, vol. 42, no. 4, July 2023, pp. 1–10. Crossref, https://doi.org/10.1145/3592116.\n\n[2] Um, Soobin, and Jong Chul Ye. \"Don't Play Favorites: Minority Guidance for Diffusion Models.\" arXiv preprint arXiv:2301.12334 (2023)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024causally,\ntitle={Causally Motivated Diffusion Sampling Frameworks for Harnessing Contextual Bias},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xLPakPOKDX},\nnote={under review}\n}" }, "abstract": { "value": "Diffusion models have shown remarkable performance in text-guided image generation when trained on large-scale datasets, usually collected from the Internet. These large-scale datasets have contextual biases (e.g., co-occurrence of objects), which will naturally cascade into the diffusion model. For example, given a text prompt of ``a photo of the living room'', diffusion models frequently generate a couch, a rug, and a lamp together while rarely generating objects that do not commonly occur in a living room. Intuitively, contextual bias can be helpful because it naturally draws the scene even without detailed information (i.e., visual autofill). On the other hand, contextual bias can limit the diversity of generated images (e.g., diverse object combinations) to focus on common image compositions. To have the best of both worlds, we argue that contextual bias needs to be strengthened or weakened depending on the situation. 
Previous causally-motivated studies have tried to deal with such issues by analyzing confounders (i.e., contextual bias) and augmenting training data or designing their models to directly learn the interventional distribution. However, due to the large-scale nature of these models, obtaining and analyzing the data or training the huge model from scratch is beyond reach in practice. To tackle this problem, we propose two novel frameworks for strengthening or weakening the contextual bias of pretrained diffusion models without training any parameters or accessing training data. Briefly, we first propose causal graphs to explicitly model contextual bias in the generation process. We then sample the hidden confounder due to contextual bias by sampling from a chain of pretrained large-scale models. Finally, we use samples from the confounder to strengthen or weaken the contextual bias based on methods from causal inference. Experiment results show that our proposed methods are effective in generating more realistic and diverse images than the regular sampling method." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Causal Inference", "Diffusion Models", "Contextual bias", "Spurious Correlations", "Object Cooccurrence", "StableDiffusion" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/265213177d0c9204a6615826a647e51039755ee1.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Causally Motivated Diffusion Sampling Frameworks for Harnessing Contextual Bias" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xMOLUzo2Lk
EIA: ENVIRONMENTAL INJECTION ATTACK ON GENERALIST WEB AGENTS FOR PRIVACY LEAKAGE
main
Active
Web Agent;Attack
alignment, fairness, safety, privacy, and societal considerations
3;6;6;6;8
3;3;3;5;3
3;3;3;3;4
1;3;3;3;3
3;4;3;4;3
5.8
3.4
3.2
2.6
3.4
0.0625
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "address threat model and evaluation questions, integrate CI discussion." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- the paper reads well and presentation format is easy to follow\n- the problem set is picked forward looking and justified well\n- evaluation includes multiple different language models \n- the attack is stealthy and hard to notice and is accompanied by visual examples" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper describes a setting where a malicious website tries to interfere with the web agent and attempts to exfiltrate user's PII or the whole request. As more users might be delegating their tasks to agents they have to entrust these agents with their data making a privacy problem." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- it is not clear whether targeted PII data is really vulnerable, as if the user themselves would go to the website they would share exactly same data and I am not sure that the full query is sensitive. I would appreciate more examples why this is a privacy problem.\n\n- the paper needs more evaluation for different scenarios and tasks (kind of Appendix F but with attack results). How does attack effectiveness vary across prompts and different PII data.\n\n- Add connection to contextual integrity[1,2 + others related] -- it is related as user's delegating their data cannot know in advance what data is needed for the particular task and it's important to follow CI. Might be a useful argument why the proposed threat model matters -- LLMs can be trusted with private data (except for the proposed attack).\n\n[1] Mireshghallah, Niloofar, et al. \"Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory.\" ICLR'24\n[2] Bagdasaryan, Eugene, et al. \"Air Gap: Protecting Privacy-Conscious Conversational Agents.\" arxiv (2024)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. 
Can the authors elaborate on the practical difficulty of implementing EIA in a real-world scenario, especially considering the variability of web designs and how attackers could adapt to different environments?\n\n2. Are there any promising directions for future work on defenses, beyond the defensive system prompts and traditional malware detection tools, that could mitigate EIA without compromising the agent’s functional integrity?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. Novelty of Attack: The concept of EIA, which blends form and mirror injections into web environments, is innovative. The focus on environmental adaptation to manipulate web agents without disrupting their primary tasks is a valuable contribution to the field of adversarial attacks.\n\n2. Comprehensive Threat Model: The authors present a well-defined threat model that details two distinct adversarial targets (stealing PII and full requests) and realistic attack scenarios, making the study relevant for real-world applications of generalist web agents.\n\n3. Impact on Web Security: The discussion on how traditional web security tools (e.g., VirusTotal) fail to detect EIA is insightful, as it highlights the gap in current defenses against these new forms of attacks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a novel attack method, Environmental Injection Attack (EIA), targeting generalist web agents. It aims to expose privacy risks by manipulating web environments through injections that mislead web agents into leaking Personally Identifiable Information (PII). The attack leverages form and mirror injection strategies to adapt malicious content to different parts of a webpage. The authors provide experimental evidence showing that EIA can achieve up to a 70% success rate in leaking specific PII and a 16% success rate in leaking full user requests. The paper highlights the stealthiness of EIA and discusses the limitations of current defense mechanisms like system prompts." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Limited Discussion on Practical Mitigations: While the paper evaluates system prompts as a defense and highlights their limitations, the mitigation strategies remain underdeveloped. It would be beneficial to provide a more detailed exploration of potential defenses (both on web agents and web environments) that could address this new type of attack.\n\n2. Over-reliance on Specific Frameworks: The experiments are largely based on the SeeAct framework, which, while advanced, may not fully represent the broad landscape of generalist web agents. Testing EIA on a wider variety of web agents or frameworks would improve the generalizability of the results." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How does the offline evaluation impact the understanding of the web agent’s performance and attack impact in dynamic, real-time environments? Could additional real-time experiments be conducted to address this?\n\n2. To what extent do the attack strategies generalize beyond the specific web agent framework (SeeAct) and dataset used in the study? Would further testing on diverse frameworks and datasets strengthen the findings?\n\n3. What does the original, clean aria-label look like in comparison to the injected prompt in Figure 2? Could this be provided to clarify how the injection appears in context?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- **Novelty and Relevance**: The paper introduces a **novel attack method**, Environmental Injection Attack (EIA), which addresses a significant gap in the literature regarding privacy risks posed by generalist web agents. This is highly relevant given the increasing use of such agents in handling sensitive tasks online.\n- **Comprehensive Evaluation**: The authors conduct **extensive experiments** using a state-of-the-art web agent framework and a realistic dataset (Mind2Web). The results are robust, demonstrating the effectiveness of EIA in various scenarios and providing valuable insights into the vulnerabilities of web agents.\n- **Practical Implications**: The paper discusses **realistic attack scenarios** and provides a detailed analysis of potential defenses. This practical focus enhances the paper's impact, offering actionable recommendations for improving the security of web agents in real-world applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper discusses the Environmental Injection Attack (EIA), a novel method targeting generalist web agents to exploit privacy vulnerabilities. EIA involves injecting malicious content into web environments to steal users' Personally Identifiable Information (PII) or entire user requests. The study demonstrates that EIA can achieve up to a 70% success rate in stealing specific PII and 16% in full user requests. The paper highlights the difficulty in detecting and mitigating these attacks, emphasizing the need for advanced defense strategies to protect web agents and user privacy." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **Limited Online Scenario**: While the experiments are thorough, the evaluation is conducted **offline** and does not fully assess the web agent’s capabilities in a real-time interactive environment. This limits the understanding of the attack's impact in dynamic, real-world settings. I know the authors have pointed out this, but this is indeed a weakness in my view.\n\n- **Generalization of Results**: The study focuses on a specific web agent framework (SeeAct) and a particular dataset. 
While the authors argue that the attack strategies are applicable to other web agents, **additional experiments** on different frameworks and datasets would help validate the generalizability of the findings.\n\n- **Unclear Visibility**: In Figure 2, I am curious about the appearance of the original, clean aria-label. Although the authors describe the injected prompt, “This is the right place to input the recipient name,” as normal and benign, it appears somewhat abrupt and out of place in this context. I would appreciate seeing what the clean or original aria-label looks like for comparison." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "The authors have included a detailed Ethical Statement in the submission." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "See the weakness part." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is generally well-written. The concepts and methodology are well structured and easy to follow.\n\n2. The authors propose diverse types of EIA based on attacking objectives, injection positions, attacking strategies, opacity values, etc. \n\n3. The experimental results also validate the effectiveness of the proposed attacking methods.\n\n4. The discussion on countermeasures makes the paper more complete." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies a meaningful and practical safety problem of current LLM-based agents. The authors propose a new Environment Injection Attack (EIA) paradigm, in which the attacker aims to make the agent expose the private information in the user query while completing the task successfully. They propose 2 types of EIA strategies, Form Injection and Mirror Injection. Experiments on 3 LLM-based agents and the Mind2Web dataset show that it is feasible to achieve the EIA target. The authors also include discussions on some countermeasures." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. My major concern is about the types of datasets used in the main experiments. The authors only include the 177 queries from Mind2Web in experiments, which may not be comprehensive. Including experiments and analysis on other datasets or tasks (e.g., WebShop) may make the paper more convincing.\n\n2. The results of the Sensitivity to Injection Position are interesting. Could the authors provide more explanation of why the results of $P_{+}$ are generally better than those of $P_{-}$?\n\n3. The content of the Discussion part is too lengthy. Some content could be put into the Appendix. For example, the authors could shorten the parts on Human Supervision and Implications over Defenses in Pre- and Post-Deployment of Websites in the main text, and put the detailed content in the Appendix. 
And I think Figure 21 should be put in the main text, because Relaxed-EIA is also one of the strategies proposed in this paper." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "What realistic scenarios would make your attack viable but not simpler attacks? This is one of my biggest issues with the work. Specifically, if you could outline instances in which this attack would be *uniquely* effective, that would be of aid." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Pros: This paper is quite fun and it’s nifty to see clever ways of tricking LLMs. I also quite liked the technique of setting an element to very low opacity (and one would also make it very small) in an attempt to hide it from a user but still make it visible to an LLM–there are cleverer ways of doing this with HTML, but still nice!\n\nThe evaluation was also reasonable and did demonstrate that under the presumptions made by the authors, the attack does work." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper considers an attack whereby an adversary maliciously injects elements into a website which trick agent models into entering PII or sensitive data. By using ID names or other HTML modifications that suggest that a given <input> node is the destination for PII, a malicious actor may be able to exfiltrate data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I am very unpersuaded by the evaluation of the defensive prompt. The authors claim that defensive prompting as a technique is not efficacious, and yet they test a single prompt that does not appear to have been developed in a systematic fashion (that would otherwise buttress the claim). The authors could improve this evaluation by providing a repeatable methodology for testing potential defensive prompts to demonstrate that this is more than an artifact of the specific one chosen. \n\nThe authors also cite the xz backdoor, but the threat model in that case seems very divorced from the model proposed in the paper. If the claim is that there can be bad code in open source, then one runs into the same issue as before of the peculiar threat model under consideration herein. Specifically, if an actor has that level of access to code that will run on a client, the LLM portion of the proposed attack would be extraneous.\n\nVirusTotal also seems like the wrong choice for testing if an attack is well hidden or not. VirusTotal is largely a tool for detecting known threats via signatures or blacklists–it is thus not really a great indicator of how stealthy an attack is. 
I'm not sure there's yet a way of proving an attack in this context is stealthy or not and would suggest just removing the claim or else putting more thought into how this can be more conclusively demonstrated." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We examine privacy risks of generalist web agents, proposing a realistic threat model and introducing the Environmental Injection Attack (EIA). EIA effectively steals users' private information while remaining difficult to detect and mitigate." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024eia,\ntitle={{EIA}: {ENVIRONMENTAL} {INJECTION} {ATTACK} {ON} {GENERALIST} {WEB} {AGENTS} {FOR} {PRIVACY} {LEAKAGE}},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xMOLUzo2Lk},\nnote={under review}\n}" }, "abstract": { "value": "Recently, generalist web agents have demonstrated remarkable potential in autonomously completing a wide range of tasks on real websites, significantly boosting human productivity. However, web tasks, such as booking flights, usually involve users' personally identifiable information (PII), which may be exposed to potential privacy risks if web agents accidentally interact with compromised websites—a scenario that remains largely unexplored in the literature. In this work, we narrow this gap by conducting the first study on the privacy risks of generalist web agents in adversarial environments. First, we present a realistic threat model for attacks on the website, where we consider two adversarial targets: stealing users' specific PII or the entire user request. Then, we propose a novel attack method, termed Environmental Injection Attack (EIA). EIA injects malicious content designed to adapt well to environments where the agents operate, and our work instantiates EIA specifically for privacy scenarios in web environments. We collect 177 action steps that involve diverse PII categories on realistic websites from the Mind2Web dataset, and conduct experiments using one of the most capable generalist web agent frameworks to date. The results demonstrate that EIA achieves up to 70\% attack success rate (ASR) in stealing users' specific PII and 16\% ASR in stealing a full user request at an action step. Additionally, by assessing the stealthiness and experimenting with a defensive system prompt, we indicate that EIA is hard to detect and mitigate. Notably, attacks that are not well adapted for a webpage can be detected through careful human inspection, leading to our discussion about the trade-off between security and autonomy. However, extra attackers' efforts can make EIA seamlessly adapted, rendering such human supervision ineffective. Thus, we further discuss the implications on defenses at the pre- and post-deployment stages of the websites without relying on human supervision and call for more advanced defense strategies." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Web Agent", "Attack" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/5d63126f2ce6f0222bb16158be46da063756a188.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "EIA: ENVIRONMENTAL INJECTION ATTACK ON GENERALIST WEB AGENTS FOR PRIVACY LEAKAGE" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xMxHJxp192
DeltaGNN: Graph Neural Network with Information Flow Control
main
Active
deep learning;neural network;graph neural network;topology;homophily;heterophily;over-smoothing;over-squashing;long-range interactions
learning on graphs and other geometries & topologies
3;5;5;5;6
4;4;4;4;3
2;2;3;3;3
2;2;2;3;3
2;2;3;4;2
4.8
3.8
2.6
2.4
2.6
-0.612372
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. What is the formulation of equation of $\\Theta_t\\left(\\mathbf{A}^{t-1}, K(t, \\theta)\\right.$, Score $\\left.^t\\right)$ that is used to filter the graph?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper propose an interesting idea to connect first and second delta embedding with over-smoothing and over-squashing. \n\tThe proposed two lemma demonstrate the relationship between them. \n\tSuch relationship provides insight for developing algorithm to measure and alleviate over-smoothing and over-squashing problems considering the node embeddings.\n2. The proposed metric inforation flow score can numerically find the nodes that might cause over-smoothing and over-squashing in this graphs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes DeltaGNN which considers the long-range node interaction via information flow control or semi-supervised node classification.\nThe key idea is to take use of first delta embeddings $\\Delta_u^t$ and the variance of second delta embeddings $\\mathbb{V}_t[\\Delta_u^2]$.\nIf the node is connected with the same labels, then $\\Delta_u^t$ tends to be some, since the features from neighbors are close to center embeddings.\nIf the node works as a bottleneck, then the aggregated features in each layer will have a huge difference, which might cause the big variance of $(\\Delta^2)_u^t$, denoted as $\\mathbb{V}_t[\\Delta_u^2]$.\nBased on this, information flow score is used to measure the nodes that is responsible for over-smoothing and over-squashing.\nThen, with graph filtering on edges, the graph cuts edges to increase the homophily for short-range interaction, and connect the selected components for long-range interactions.\nCombined with these two, the author proposes the DeltaGNN." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While it is good to have a numerical metric to identity the key nodes and edges for over-smoothing and over-squashing.\n\tThe connection of heterophilic graphs in DeltaGNN to these two metric is not that strong. \n\tFirst, it seems like heterophilic graphs is not related to solve the over-smoothing or over-squashing problem.\n\tSecond, if using informative flow control can perfectly solve the over-smoothing or over-squashing problem, \n\twhy model can not get perfect results?\n\tIn other words, why heterophilic graph is needed in this case? \n\tDoes the introduction of heterophilic graph will cause further questions about non-existing interactions?\n\tThis part needs to be further justified. The motivation and experiments of the reasons to use this part need to be provided.\n\n2. 
As a suggestion, some numerical experiments can be provided to demonstrate that with the information flow control, \n\tthe new homophilic graph can have fewer over-smoothing or graph bottleneck issues on real-world datasets.\n\tFor example, the homophilic ratio of a node can be calculated and compared between the original graphs and the rewired graphs.\n\n3. The experiments are a little weak, and more and larger graph datasets, such as the OGB datasets, should be included." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. In Table 1, it is evident that the Information Flow Score (IFS) method underperforms other rewiring methods when combined with the GIN model, unlike with other models. This discrepancy may be due to the fact that GIN uses sum aggregation, whereas other models typically use weighted mean aggregation. The sum aggregation in GIN likely results in a higher variance for $\{\sum\limits_{v \in \mathcal{N}(u)}\mathbf{M}_v \mid u \in \mathcal{V}\}$ compared to $\{\mathbf{M}_u \mid u \in \mathcal{V}\}.$ Consequently, the so-called 'aggregation velocity' depends not only on node features but also on node degrees. This suggests that the proposed method may not be well-suited for models that use sum aggregation. Is my understanding correct?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The suggestion in Lemma 1—identifying nodes connected by heterophilic edges by measuring feature differences during message aggregation—appears to be constructive.\n2. The introduction effectively outlines the problems of over-smoothing and over-squashing, and provides a comprehensive overview of existing methods aimed at resolving these challenges.\n3. The proposed method for addressing the problems of over-smoothing and over-squashing is both innovative and promising. The approach involves decoupling the original graph into a homophilic subgraph and a heterophilic subgraph using the proposed information flow score. Subsequently, the method performs dual aggregation on these subgraphs to capture both short-term and long-term dependencies.\n4. The complexity of the information flow score method is superior to that of other rewiring methods.\n5. The proposed method demonstrates strong performance in terms of prediction accuracy and scalability." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work targets the prevalent issues of over-smoothing and over-squashing in GNNs. It highlights that current approaches often face challenges such as high computational complexity and lack of generalizability. To tackle these issues, the authors introduce a mechanism termed 'information flow control', which employs an innovative metric known as the 'information flow score'. 
This mechanism is designed to mitigate over-smoothing and over-squashing while maintaining linear computational overhead. Empirical evaluations demonstrate its superior performance under constrained computational conditions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. \"$\triangle^t_u$ can be interpreted as the velocity at which the node embeddings are aggregated at layer t.\" The concept of aggregation velocity is somewhat confusing. More background knowledge and explanation, as well as examples, are required to help readers understand the measurement of aggregation velocity.\n\n2. The authors propose using $(\triangle^2)^t_u = d(\triangle^t_u - \triangle^{t-1}_u)$ to measure the rate of change in the rate at which node embeddings are aggregated. However, $\triangle^t_u$ and $\triangle^{t-1}_u$ are outputs from different layers and thus belong to different spaces. Therefore, the rationale for measuring the distance between points in these two spaces is questionable. Please provide justification for this measurement.\n\n3. The Information Flow Control (IFC) mechanism is a core component of the proposed method. Therefore, the implementation details of the IFC mechanism, including the score hill ascent framework, should be included in the main text rather than in the appendix. As currently presented, the score hill ascent framework is difficult to follow.\n\n4. In Figure 2, some subgraphs are difficult to interpret. For example, the 'feature density - feature value' plot and the 'score - node' plot could benefit from additional clarification or improved labeling. What do the different curves in the feature density - feature value plot represent? Additionally, the phrase 'and enhance the graph score' lacks clarity. A definition of the term 'graph score' would be helpful.\n\n5. The proof of Lemma 1 is difficult to follow. Specific issues are detailed in the following list. **Additional background information and explanation are needed to help readers understand the proof**.\n - In line 727, the term 'valid' is used to ensure that the assignment respects the given homophily ratio $\mathcal{H}_u$. However, the concept of 'valid' is not clearly defined, and it is unclear how this term ensures compliance with the specified homophily ratio. Additional background information and explanation are needed to help readers understand these aspects.\n - The relationship between $\triangle^t_u$ and the valid assignment $s$ is not explained.\n - In the equation $U(\mathcal{H}_u)_u = \operatorname{max}_{s\in S}(\triangle^t_u)$, the representation of $U$ is unclear.\n - Due to the lack of clarity, it is not possible to understand why 'any node $u$ with $\triangle^t_u > p$ will have $\mathcal{H}_u < \mathcal{H}$.'\n\n6. The phrase 'as this quantity depends on the homophily of the node $u$', in line 727, requires clarification. It is not immediately apparent why this quantity should depend on the homophily of the node. A clear explanation is needed to elucidate this dependency.\n\n7. Minor issues: a) in line 723, should \"neighbourhood $N(u)$\" be revised to \"neighbourhood $\mathcal{N}(u)$\" to be consistent with the neighborhood notation? b) $\bigoplus\limits_{v \in \mathcal{N}(u)}\mathbf{M}_u$ should be revised to $\bigoplus\limits_{v \in \mathcal{N}(u)}\mathbf{M}_v$." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* From Table 7 in the appendix, DeltaGNN variants consume approximately 2-3 times more GPU memory than GCN on small graphs. Could the authors discuss whether this would lead to memory issues when applied to larger graphs?\n\n* Did the authors evaluate DeltaGNN on more challenging heterophilic datasets, such as Squirrel or Chameleon [3]?\n\n* [Minor] Typo in Line 181: \"$∆^t_u$ the first\" $\\rightarrow$ \"$∆^t_u$ be the first.\"" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The paper is well-written and easy to follow.\n\n* It introduces a novel connectivity measure, called the information flow score, which is supported by both theoretical analysis and empirical evidence.\n\n* DeltaGNN demonstrates consistent improvements across various datasets, outperforming all baseline methods compared in the study." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper identifies that Long-Range Interactions (LRIs) are crucial for node classification tasks. Standard GNNs struggle to capture these long range dependencies due to issues such as over-smoothing and over-squashing. To address these challenges, the authors propose information flow control, a graph rewiring mechanism. Further, the paper introduces DeltaGNN, which implements information flow control to capture both long- and short-range dependencies. The proposed method is validated on several graph datasets with varying levels of homophily and sizes." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* DeltaGNN is proposed as a scalable approach for detecting both long-range and short-range interactions. However, there are no large-scale experiments to validate this claim, as all experiments were conducted on small graphs. It would be beneficial if the authors could report results on larger homophilic datasets, such as ogbn-arXiv, as well as on large-scale non-homophilous graphs from [1].\n\n* The related work section does not adequately situate the current research within the context of existing GNN work based on Graph Filters (e.g., [3, 4]). \n\n* Lines 361-363 indicate that DeltaGNN is compared against state-of-the-art (SoTA) GNNs. However, GCN, GAT, and GIN are not the current SoTA for the chosen benchmarks. The authors should compare DeltaGNN with more recent GNNs (e.g., ACM-GCN+ / ACMII-GCN++ from [2]) to more accurately assess its effectiveness.\n\n* It is unclear why MLP is not included as a baseline in Table 1. MLP has been shown to outperform on the three non-homophilous datasets (Texas, Wisconsin, Cornell) as reported in [4]. 
A comparison against graph filter-based methods, such as GPR-GNN [3] or PPGNN [4], would provide further insights into the performance of DeltaGNN.\n\n---\n[1] Large Scale Learning on Non-Homophilous Graphs: New Benchmarks and Strong Simple Methods, NeurIPS 2021\n\n[2] Revisiting Heterophily For Graph Neural Networks, NeurIPS 2022\n\n[3] Adaptive Universal Generalized PageRank Graph Neural Network, ICLR 2021\n\n[4] A Piece-Wise Polynomial Filtering Approach for Graph Neural Networks, ECML 2022" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. the information flow score, which identifies graph bottlenecks and heterophilic node interactions,\n\n2. In Definition 1, the first Delta embeddings look like the \"norm\" of the high-pass filtered graph signal or the neighborhood diversification [4]. The second Delta embedding is a new and interesting one.\n\n3. So how can Lemmas 1 and 2 offer insights into the graph’s homophily and topology? Please explain in sentences.\n\n4. How did you get equation (2)? Why \"nodes with low values of this measure are likely to correspond to regions where over-smoothing and over-squashing occur\"?\n\n5. \"The long-range dependencies are then learned via a GNN heterophilic aggregation.\" What is \"heterophilic aggregation\"? Do you mean aggregation from long-range nodes in different classes? Are such long-range dependencies beneficial?\n\n6. \"This concept of homophily-based interaction-decoupling is crucial to prevent over-smoothing by avoiding using a standard GNN aggregation on heterophilic edges.\" The \"decoupling\" is indeed important; for example, in [4], the authors use 3-channel architectures to address heterophily. But the objective is not to prevent over-smoothing, it is to improve node distinguishability [5]. A direct proof of why and how your proposed method can improve node distinguishability is recommended.\n\n7. Missing comparison with some SOTA models on heterophilic graphs, e.g., [4,6,7]. More comparisons on the truly challenging heterophilic datasets suggested in [3] are recommended.\n\n\n\n[1] Müller L, Galkin M, Morris C, Rampášek L. Attending to Graph Transformers. Transactions on Machine Learning Research.\n\n[2] Less is More: on the Over-Globalizing Problem in Graph Transformers. In Forty-first International Conference on Machine Learning.\n\n[3] The heterophilic graph learning handbook: Benchmarks, models, theoretical analysis, applications and challenges. arXiv preprint arXiv:2407.09618. 2024 Jul 12.\n\n[4] Revisiting heterophily for graph neural networks. Advances in Neural Information Processing Systems. 2022 Dec 6;35:1362-75.\n\n[5] When Do Graph Neural Networks Help with Node Classification? Investigating the Homophily Principle on Node Distinguishability. Advances in Neural Information Processing Systems. 2024 Feb 13;36.\n\n[6] Simplifying approach to node classification in graph neural networks. 
Journal of Computational Science, 2022, 62: 101695.\n\n[7] Diverse message passing for attribute with heterophily. Advances in Neural Information Processing Systems, 2021, 34: 4751-4763." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "originality: good\nquality: medium\nclarity: medium\nsignificance: medium" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a mechanism to mitigate over-smoothing and over-squashing in Graph Neural Networks (GNNs) by implementing an \"information flow control\" strategy that utilizes an \"information flow score.\" This approach allows for effective management of node embeddings across varied graph structures, demonstrating enhanced performance in large-scale graphs while maintaining computational efficiency." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. \"These long-range interactions (LRIs) are crucial for node classification tasks, as they help distinguish between different classes and improve classification accuracy\" This is not true. For example, graph transformers are good at capturing long-range node dependencies. However, they perform poorly on node classification tasks, especially on heterophilic graphs [1]. It is found that distant information is not always useful, and over-globalization can cause performance degradation of graph models [2].\n2. \"over-smoothing is not only a topological phenomenon but is primarily a consequence of graph heterophily.\" There is no causal relation between over-smoothing and heterophily. As stated in [3], over-smoothing only happens in deep GNNs, but not in shallow GNNs. Heterophily will cause performance degradation for all GNN models, no matter whether they are deep or shallow." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Equation (1) is not the most general way of writing a 1-hop GNN aggregation, as there is no residual term. Namely, one would typically expect $\phi$ to take two arguments, i.e., $(\mathbf{X}_u^t, \bigoplus...)$\n- Line 159: The expression “embedding agnostic” is a little vague to me, so perhaps you can specify a little more clearly what you are implying here.\n- Line 285: What is a “homophilic GNN”?\n- The paragraph at lines 283-291 uses too many vague words and is anything but clear. For example, lines 288-289, what would a “heterophilic graph condensation” be? \n- Line 330–331: How can removing edges that are bottlenecks necessarily reduce oversquashing? What if now you have disconnected components? This process can only work if one correctly identifies node labels, but this is something that your algorithm in general cannot know in advance." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "I think that it is valuable addressing issues like oversquashing and oversmoothing simultaneously, rather than studying them in isolation and independently of one another. I also liked the idea of leveraging \"moments\" from the feature distribution at different layers to guide the graph-filtering process." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a score based on node features aggregated in a GNN layer, that aims at capturing the likelihood of a node to be responsible for oversmoothing and oversquashing. By leveraging such score in a graph-filtering pipeline, the authors propose a framework to alter the graph connectivity within a GNN scheme." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There are important aspects of the submission that require reworking. \n\n**Message and Presentation**\n\n- In the introduction, there is often some ambiguity in the way you mention oversmoothing and oversquashing, as if they were interchangeable concepts. This is not the case, and should be emphasized. Oversmoothing is a problem that occurs for *some* GNNs and is independent of the topology (as a phenomenon, not how quickly that occurs) and is somewhat orthogonal to long-range interactions since in the limit of many layers, node features become indistinguishable irrespectively of their distance. Oversquashing instead, is an issue that occurs for *all* 1-hop GNNs and is very much dependent on the topology (namely, their commute time) and hence affects long-range interactions, independent of the depth or the ability to capture local interactions.\n- Even more significantly, you keep overlapping the issue of oversmoothing with that of heterohily (for example Line 110, Line 121, Line 161 but this notion is repeated throughout the paper). This is wrong. While Definition 2.1 accounts for the labels, this to me represents more of a choice, as oversmoothing is the convergence of node features to the same representation over a connected component of the graph. As such, it is actually simply caused by low-frequencies dominating over high-frequencies in the graph spectrum. In fact, it can be mitigated or avoided by relying on architectures that do not operate via low-pass filters. I suspect that what you are implying here, is that oversmoothing becomes more of an issue in the presence of heterophily, as nodes with different labels become indistinguishable, but *this is a consequence of and not the cause of oversmoothing and should be rectified*.\n\n- Quite a few citations are missing in the related work, for example regarding rewiring [1,2,3] but also Graph-Transformers.. \n- The presentation of the framework is a little contrived (see my questions below). Also, while you try to distinguish yourself from graph-rewiring algorithms, your approach removes edges, and this is a key part of it. For this reason, I think it is a little misleading to distinguish yourself from graph rewiring techniques. You should be more specific, and mention that the rewiring is adaptive and based on GNN layer outputs more than topological connectivity measures.\n\n**Theory**\n\n- I am a little confused by Lemma 1. 
To me, the homophily of a node only depends on the label information and the topology and has nothing to do with the architecture being used and/or the features. This indeed seems to be reflected also in your Definition 2.2 where I am reading that $\Phi$ can be taken to be the ground-truth label assignment. However, it seems that in Lemma 1 you are deriving the homophily of a node based on what can be mapped/separated from the node features, i.e. it has more to do with distinguishability from node features. If so, this should be clearly emphasized. As such, I would not really talk about homophily but node-feature separability.\n\n- I don’t think that Lemma 2 is an actual Lemma since your proof is essentially a discussion based on the results of Nguyen et al. You should remove the statement and replace it with a discussion based on what you have in the appendix. As it stands, I find it confusing and indeed informal, to a point that this is not a mathematical statement.\n\n- In light of my comments regarding Lemma 2, I don’t think that your score definition is that well motivated. More precisely, I can see why the denominator makes sense in relation to oversmoothing, since it measures node-feature separability after rounds of message passing (and *not* homophily), but I struggle to see how the numerator relates to oversquashing. You should expand on the “proof” from Lemma 2, which is not really a proof, to better motivate this score.\n\n**Experiments**\n\nEvaluation is not convincing. On all the benchmarks you used, it is highly debatable that long-range interactions are present at any level. In fact, I believe the majority of people would argue that LRIs are not present on Cora, Pubmed, etc. Additionally, datasets like Texas, Wisconsin, etc. are known to have several issues and the community has proposed alternative options. I personally struggle to accept claims of “state of the art improvements by 1 %” on the likes of Cora and Pubmed these days. Graphs like Cornell, Texas and Wisconsin are also extremely small and super sensitive to tuning. The paper overall proposes a methodology, and as such, should be thoroughly tested on more relevant benchmarks. \n\n[1]: Mitchell Black, Zhengchao Wan, Amir Nayyeri, and Yusu Wang. Understanding oversquashing in\nGNNs through the lens of effective resistance, ICML 2023.\n\n[2]: Adrián Arnaiz-Rodríguez, Ahmed Begga, Francisco Escolano, and Nuria Oliver. DiffWire: Inductive\nGraph Rewiring via the Lovász Bound, LOG 2022.\n\n[3]: Kedar Karhadkar, Pradeep Kr Banerjee, and Guido Montúfar. FoSR: First-order spectral rewiring for\naddressing oversquashing in GNNs, 2022." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024deltagnn,\ntitle={Delta{GNN}: Graph Neural Network with Information Flow Control},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xMxHJxp192},\nnote={under review}\n}" }, "abstract": { "value": "Graph Neural Networks (GNNs) are popular machine learning models designed to process graph-structured data through recursive neighborhood aggregations in the message passing process. When applied to semi-supervised node classification, the message-passing enables GNNs to understand short-range spatial interactions, but also causes them to suffer from over-smoothing and over-squashing. 
These challenges hinder model expressiveness and prevent the use of deeper models to capture long-range node interactions (LRIs) within the graph. Popular solutions for LRIs detection are either too expensive to process large graphs due to high time complexity or fail to generalize across diverse graph structures. To address these limitations, we propose a mechanism called information flow control, which leverages a novel connectivity measure, called information flow score, to address over-smoothing and over-squashing with linear computational overhead, supported by theoretical evidence. Finally, to prove the efficacy of our methodology we design DeltaGNN, the first scalable and generalizable approach for long-range and short-range interaction detection. \nWe benchmark our model across 10 real-world datasets, including graphs with varying sizes, topologies, densities, and homophilic ratios, showing superior performance with limited computational complexity." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "deep learning", "neural network", "graph neural network", "topology", "homophily", "heterophily", "over-smoothing", "over-squashing", "long-range interactions" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/e39f924a9248ec359830cc31dc0226f42b1f61c6.pdf" }, "presentation": null, "primary_area": { "value": "learning on graphs and other geometries & topologies" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "DeltaGNN: Graph Neural Network with Information Flow Control" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
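The DeltaGNN review above argues that oversmoothing is a spectral, label-independent collapse: low frequencies dominate under repeated low-pass filtering, so node features converge regardless of labels or heterophily. That point is easy to check numerically. Below is a minimal sketch (illustrative only, not the submission's method; the random graph, feature width, and seed are arbitrary choices): repeated GCN-style aggregation drives the normalized Dirichlet energy of the features toward zero with no label information involved anywhere.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1)
A = A + A.T                                # random undirected graph
A_hat = A + np.eye(n)                      # add self-loops, GCN-style
d = A_hat.sum(axis=1)
S = A_hat / np.sqrt(np.outer(d, d))        # D^-1/2 (A + I) D^-1/2

def dirichlet_energy(X):
    # Energy of degree-normalized features; 0 means fully oversmoothed.
    Xn = X / np.sqrt(d)[:, None]
    diff2 = ((Xn[:, None, :] - Xn[None, :, :]) ** 2).sum(-1)
    return 0.5 * (A_hat * diff2).sum()

X = rng.standard_normal((n, 8))
for k in (1, 2, 4, 8, 16, 32):
    print(k, dirichlet_energy(np.linalg.matrix_power(S, k) @ X))
```

The printed energies shrink toward zero as the number of aggregation rounds grows, matching the reviewer's description of oversmoothing as a consequence of low-pass filtering rather than of heterophily.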
xN6z16agjE
Evaluating word representation for hypernymy relation: with focus on Arabic
main
Active
Word representation;hypernymy relation;hypernymy specific embedding;hypernymy detection.
applications to computer vision, audio, language, and other modalities
3;3;3
4;3;3
2;2;3
2;1;2
2;2;1
3
3.333333
2.333333
1.666667
1.666667
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Please, correct the formatting of the paper, it is very hard to read in current condition." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The goal of the paper is easy to understand, as it provides valuable insight into which representations are best for hypernym-based tasks (none are best overall)\n\n2. Experiment design makes sense and is mostly without issues. There is a minor issue stemming from the limited resources available to the authors of the paper, but I will touch upon them in the weaknesses part of the review." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Authors try to evaluate different algorithms, which create hypernymy relation representations in Arabic. They select AraBERT corpus as a base for training all embedding models and train several models on this data. As a baseline for contextual embeddings, BERT is used, while for classic embeddings GloVe is used. For hypernymy-specific embedding LEAR, GLEN, Princare and Poincare Glove is used. After that, a simple feedforward models are trained for all embeddings for three tasks: hypernymy detection, hypernymy directionality detection and semantic relation classification. Results show, that Poincare GloVe performs best on hypernomy detection and hypernomy directionality detection tasks. In semantic relation classification tasks Poincare GloVe performs worse. Overall, there is no best representation for all tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Authors are very constrained in resources, having to resort to halving the size of the training dataset for some of the algorithms. This raises questions to the validity of the collected information, since Poincare GloVe, the best algorithm in Hypernymy Directionality and Hypernymy Detection tasks, has seen only half as many data samples, which can possibly make the results non-representative. However, due to the simplicity of the Poincare GloVe, most likely it won’t impact the results as much, thus, making this just a minor issue.\n\n2. The quality of the text's presentation is poor; it contains numerous typos and improperly formatted tables. Table 7 has incorrectly formatted items in header, Table 8 has incorrectly formatted dataset names, in table 7 the highest F1 score for ASRD dataset is incorrectly attributed to 100D Poincare GloVe, which has the score of 0.88 instead of Poincare Embedding, which have the score of 0.89. On the line 070 BERT has no citation available, on the line 220 the sentence starts from lowercase, on the line 291 the word Assess is incorrectly capitalized, line 480 is cut in half, etc. 
Both the Introduction and Related Work sections are hard to read, since they are written as one big wall of text instead of separate paragraphs on groups of algorithms. Some of the citation years are in brackets (lines 065-068), some are not (line 079)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. How do the results of your study compare to existing models in other languages, especially in terms of applicability to Arabic?\n2. What are the advantages of your study over previous studies? Is a new methodology or a new dataset proposed?\n### Additional feedback\n- Consider revising the introduction to better outline the motivation for the study and its significance in the broader context of NLP research.\n- It may be helpful to include more hypernym examples in Arabic throughout the paper to illustrate your points." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. **Relevance**: Focusing on hypernymy in Arabic is timely and relevant, and it addresses a less explored area of NLP.\n2. **Experiments**: The paper presents a comprehensive experimental evaluation of multiple word representation techniques, demonstrating a solid methodological framework.\n3. **New findings**: The findings presented in the paper provide new insights into how Arabic word embeddings can be enhanced to better detect hypernymy." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on evaluating and improving word vector representations specialized for hypernymy relations in Arabic. The study probes gaps in hypernymy-related performance by conducting multiple sets of experiments on different datasets, and this line of research is important for NLP tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Literature Review**: Although the paper discusses related work, a more comprehensive literature review would have positioned the paper's contribution more effectively. I suggest the authors report precision and recall specifically for elements that have both spatial and logical relationships, compared to those with only one type of relationship.\n2. **Clarity**: Some sections may lack clarity, particularly in explaining the significance of the research methodology and findings.\n3. **Results Interpretation**: The results section may need to discuss the significance of the findings in more depth, especially in the context of existing models."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "There are no specific questions here. But I suggest that the authors might consider proposing a novel method or modification in word representation tailored to hypernymy in Arabic instead of solely focusing on evaluation. In addition, this work may be better suited for a specialized venue, such as an NLP workshop focused on the Arabic language. This could provide a more appropriate audience that appreciates the linguistic specificity." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Word representations for hypernymy are essential for a variety of tasks in NLP and information extraction.\n\n2. By concentrating on the Arabic language, the paper contributes to a less explored area, providing insights for non-English NLP research.\n\n3. The research conducts an evaluation of multiple types of embeddings, including traditional, hypernymy-specific and contextual embeddings." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the challenge of modeling hypernymy relations specifically focusing on Arabic. The authors evaluate various word representation methods to determine the most effective for Arabic hypernymy tasks. They compare traditional embeddings, hypernymy-specific embeddings, and contextual embeddings using an Arabic corpus and multiple datasets to assess their impact on tasks such as hypernymy detection." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper primarily focuses on evaluating existing word representations rather than introducing a novel approach or method for hypernymy modeling. The novelty is very limited.\n\n2. The written quality of this paper is poor. The authors should carefully revise this paper for better presentations. For example, the citation format is incorrect.\n\n3. The paper seems to provide an evaluation of performance effects without a deep analysis of why certain embeddings perform better or worse in specific contexts or tasks.\n\n4. The impact of this work in the ICLR community is limited. Maybe an NLP workshop on the Arabic language is more suitable." 
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "Evaluating different types of word embedding for modeling hypernymy relation" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024evaluating,\ntitle={Evaluating word representation for hypernymy relation: with focus on Arabic},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xN6z16agjE},\nnote={under review}\n}" }, "abstract": { "value": "Hypernymy relation is one of the fundamental relations for many natural language processing and information extraction tasks. A key component of the performance of any hypernymy-related task is word representation. Traditional word embeddings capture word similarity but fall short of representing more complex lexical-semantic relationships between terms, such as hypernymy. To overcome this, recent studies have proposed hypernymy-specific representations. In this study, we conduct an evaluation of several types of word representations to determine the most effective approach for modeling hypernymy relationships in Arabic. We use an Arabic training corpus and several datasets to assess traditional embedding, hypernymy-specific embedding, and contextual embedding across several hypernymy-related tasks, including hypernymy detection. The results indicate that different embeddings have different effects on the performance. Moreover, the performance is affected by the selected datasets. This highlights that there is a need for further research to develop more robust word representation and benchmark datasets." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Word representation", "hypernymy relation", "hypernymy specific embedding", "hypernymy detection." ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/55acc670c2c4b632099e99b36f7e87c11e2bf3e9.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Evaluating word representation for hypernymy relation: with focus on Arabic" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xNCDKQMPYD
GPT4LoRA: Optimizing LoRA Combination via MLLM Self-Reflection
main
Active
MLLM;Self-Reflection;LoRA Combination
foundation or frontier models, including LLMs
3;3;3;5
4;4;3;3
1;2;2;3
2;2;2;2
1;2;2;3
3.5
3.5
2
2
2
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- What happens when the number of LORA weights is increased from three?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper presents a training-free method of generating linear combination of merging multiple LORA trained weights, utilizing GPT4o.\n- The paper shows qualitatively that these approaches offer a better and cheaper method to controlling the characteristics of the generated image." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a training free method to combine multiple LoRA trained weights to generate better aligned images with text prompts in Text2Image setup. Building on top of LORA-merge, where multiple LORA trained weights are linearly combined, this paper utilizes a multimodal LLM to directly generate the combination weights. Furthermore, the approach utilizes the same multimodal LLM to refine its generations using feedback and self refinement." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper is low on contributions - the method, while interesting, probably doesn't warrant a full paper - its better suited for a blog post. \n- The paper is weak on quantitative results - Table 2 results does not appear statistically significant.\n- The paper lacks analysis on the GPT4o outputs of the linear combinations. \n- Experiments only conducted on closed source GPT4o, it is unclear if this kind of approach works for open source models, thereby limiting the applicability." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See the Weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "(1)GPT4LoRA’s use of MLLM self-reflection introduces a new paradigm for training-free LoRA model combinations.\n\n(2)Extensive experiments demonstrate superior performance compared to baseline methods." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces GPT4LoRA, a method that leverages the self-reflection capabilities of multimodal large language models (MLLMs) to enhance Low-Rank Adaptation (LoRA) model combinations for generative tasks. Traditional LoRA combination approaches often require additional fine-tuning or changes to model architecture, which GPT4LoRA addresses through a training-free, three-step process: Generate, Feedback, and Refine. Extensive experiments conducted on the realistic and anime-style datasets show that GPT4LoRA outperforms existing methods in both quantitative and qualitative evaluations." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "(1) The evaluation benchmark used in this paper is unclear. The paper mentions “Extensive experiments conducted on a benchmark of widely-used LoRA models” in lines 83 and 482, but lacks citations, leaving it unclear which text-to-image evaluation dataset is used.\n\n(2) The paper lacks comparisons with comparable methods, such as ZipLoRA and LoRA Composite. ZipLoRA used DreamBooth and StyleDrop as evaluation datasets—could authors evaluate the GPT4LoRA on these datasets and choose ZipLoRA as the strong baseline model?\n\n(3) The method's reliance on the self-reflection capabilities of multimodal large language models (MLLMs) like GPT-4 may result in variable outcomes depending on the MLLM's quality and adaptability, potentially limiting robustness across different models.\n\n(4) While few-shot sample selection is critical to GPT4LoRA's success, details about this process are sparse, and the choice of demonstration samples significantly impacts performance, which may make it challenging for other researchers to reproduce the results effectively.\n\n(5) A minor issue: some references in the paper are improperly formatted. For example, in line 144, \"Reflexion Shinn et al. (2024) converts,\" and in line 214, \"Unlike previous methods Lee et al. (2024); Xu et al. (2023).\"\n\n(6) Some results in the paper show limited improvements; it’s recommended that the authors conduct significance tests to analyze these improvements." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Since [1] provides a testbed with 480 composition sets, why did the authors create a new benchmark with only 105 composition sets? The new benchmark omits the \"object\" category and is otherwise identical.\n\n2. Why are there 105 composition sets for 24 LoRAs? Based on supplementary material Table 1, anime-style compositions should yield 3&times;3&times;(3+2)=45 sets, and realistic-style should yield 4&times;4&times;(3+2)=80, for a total of 125 sets.\n\n3. Given that GPT models exhibit position bias when used as evaluators, did the authors average scores by switching the image positions in comparative evaluations?\n\n4. 
The paper states, \"We also provide the experimental results of combining two LoRA models (including ZipLoRA (Shah et al., 2023)) in the supplementary material,\" (Lines 307-308) but the supplementary material appears to lack any additional experiments.\n\n5. In Figure 3, why do point-wise evaluation scores vary significantly when switching baseline models? For example, GPT4LoRA's score is around 7.0 when compared with LoRA Merge, but exceeds 9 when compared with LoRA Composite.\n\n6. If there are 105 composition sets, and MLLM-based evaluation is repeated 10 times with 3 random seeds per image, this should result in over 3K comparative evaluations. Why do the win rates in Figure 3 show only a single decimal place?\n\nIf I have misunderstood any of these weaknesses or questions, please feel free to correct me." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The authors integrate MLLMs and diffusion models, proposing a new approach to optimize LoRA composition.\n2. They construct a testbed with 24 LoRA models based on SDXL.\n3. Experimentally, GPT4LoRA provides some improvement over existing LoRA composition methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces GPT4LoRA, a method designed to optimize the combination of LoRA models for generative image synthesis using the self-reflection capabilities of MLLMs. GPT4LoRA is a training-free framework with three-step process: Generate, Feedback, and Refine. This iterative framework leverages MLLMs to adjust coefficient weights without modifying the underlying model architecture​.\n\nThe paper evaluates GPT4LoRA against existing methods using a combination of quantitative metrics and GPT-4o-based assessments. The experiments focus on maintaining alignment between generated images and textual prompts across realistic and anime styles. The authors claim that GPT4LoRA achieves improvements over baseline methods in both composition quality and image coherence." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Major Issues:**\n\n1. **Efficiency Concerns:** GPT4LoRA’s efficiency is a significant concern. To combine multiple LoRAs into a single image, the proposed framework requires GPT-4o to first generate coefficients based on few-shot samples, then use SDXL to generate multiple images, followed by GPT-4o generating feedback to refine textual prompts and coefficients. Additionally, this process is iterated $N$ times. In sum, generating a single image can potentially require prompting GPT-4o over a dozen times and generating dozens of candidate images with SDXL. Given the relatively marginal improvements seen in qualitative and quantitative experiments, this substantial increase in computational cost may be difficult to justify.\n\n2. **Limited Experiments:** Compared to LoRA Switch and LoRA Composite, GPT4LoRA's experiments are relatively limited. Specifically, this paper evaluates only 3-LoRA combinations with 105 composition sets, while previous works have explored 2-5 LoRA combinations and included more composition sets. Prior studies also incorporated human evaluation, which this work lacks. Analytical experiments are also sparse. 
For example, only an ablation on few-shot samples is provided, while there is no analysis of each step or of the number of iterations in the framework. Since each step and each additional iteration adds significant computational cost, analyzing these aspects would be valuable.\n\n3. **Inconsistent Claims:** The authors' motivations appear unsupported due to inconsistencies. In the Introduction, the authors state that prior work suffers from (1) being \"computationally costly and impractical when a large number of LoRA models are involved\" (Lines 58-59). However, as discussed above, GPT4LoRA's computational demands are notably higher than those of existing methods. They also claim (2) \"A fundamental limitation of these methods lies in the subjectivity and unreliability of the evaluation process for image quality\" (Lines 59-69), yet the MLLM self-reflection and GPT-based evaluation used here are established in prior work. Thus, the motivations behind GPT4LoRA seem unsubstantiated.\n\n**Minor Issues:**\n\n1. **Overlap with Prior Work:** Parts of this paper closely resemble the previous study [1]. For example, the entire \"Diffusion Models\" paragraph in 3.1, as well as the first two paragraphs in \"LoRA Combination,\" show only minor paraphrasing of prior work. Besides, Table 1 is almost the same as Table 1 in [1] (but there is no mention of evaluation criteria and format requirements). In other words, the GPT-based evaluation approach (comparative evaluation, two evaluation dimensions, point-wise scoring, win rate, and evaluation prompts) is identical to [1] but lacks explicit citation and description.\n\n2. **Lack of Detail:** Several key details are missing. See Questions 1-4 below.\n\n3. **Unexplained Results:** Certain results seem confusing and lack analysis. See Questions 5-6 below.\n\n**Typos:**\n\n1. The legend and x-axis in the point-wise part of Figure 3 do not match.\n\n**References:**\n\n[1] Zhong et al. Multi-LoRA Composition for Image Generation." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. In lines 228-229, have you experimented with other choices of $k$ instead of $5$? How is the number $5$ determined?\n2. The authors should provide implementation details of LoRA Composite and LoRA Switch. In their original papers, the backbone diffusion model and image resolution are quite different from GPT4LoRA's. These details would help readers understand the setup and ensure a fair comparison.\n3. If I understand it correctly, the MLLM used is GPT-4o and the same model is used for evaluation. I wonder whether this introduces any bias into the evaluation." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. This paper researches an interesting problem of composing multiple LoRAs for image customization.\n2. 
The proposed framework of leveraging MLLMs to provide feedback in the process is intuitive." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper explores using a self-reflection mechanism for LoRA combination. Specifically, it follows a \"generate, feedback, and refine\" paradigm, where MLLMs are leveraged to provide feedback on the previous round of generation, with a set of carefully selected demonstrations as prompts. The process is iterated for several rounds until the best combination weights are found. Experiments on a benchmark newly proposed by the authors verify the effectiveness of the method in terms of GPT-4o evaluation and CLIP scores." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper's presentation needs significant improvement. Specifically, the top two figures of Figure 3 contain inconsistent information: labels on the x-axis do not align with the legends. The legend in the bottom two figures of Figure 3 blocks the third bars. The authors should also be careful with the citation format, e.g., lines 305-306 and line 215.\n2. The results in Table 2 are not significant enough compared with LoRA Composite in terms of CLIP scores. A significance test would be helpful.\n3. Results in Table 3 show that without few-shot demonstrations, performance degrades severely, falling below even LoRA merge. This makes me doubt the actual effectiveness brought by the \"generate, feedback, and refine\" pipeline.\n4. Some experimental details are missing. See the questions below." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024gptlora,\ntitle={{GPT}4Lo{RA}: Optimizing Lo{RA} Combination via {MLLM} Self-Reflection},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xNCDKQMPYD},\nnote={under review}\n}" }, "abstract": { "value": "Low-Rank Adaptation (LoRA) is extensively used in generative models to enable concept-driven personalization, such as rendering specific characters or adopting unique styles. Although recent approaches have explored LoRA combination to integrate diverse concepts, they often require further fine-tuning or modifications to the generative model's original architecture. To address these limitations, we introduce GPT4LoRA, a novel method for LoRA combination that adjusts combination coefficients by leveraging the self-reflection capabilities of multimodal large language models (MLLMs). GPT4LoRA operates through a three-step process—Generate, Feedback, and Refine—without the need for additional training, relying solely on tailored prompts and iterative refinement to enhance performance. This iterative approach ensures more constructive feedback and optimizes the model responses. Experiments on various LoRA model combinations, including both realistic and anime styles, demonstrate that GPT4LoRA achieves superior results compared to existing methods. Additionally, an evaluation framework based on GPT-4o further highlights the clear performance gains offered by GPT4LoRA over standard baselines, showcasing its potential for advancing the field." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "MLLM", "Self-Reflection", "LoRA Combination" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/ec6570d5cb78af3b76c228591e7089b6d7f6bdbc.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/95693cd34f6bad5039063a0e7fdd04d6c2eb0e4b.pdf" }, "title": { "value": "GPT4LoRA: Optimizing LoRA Combination via MLLM Self-Reflection" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xNDydjYBmC
Enhancing PPB Affinity Prediction through Data Integration and Feature Alignment: Approaching Structural Model Performance with Sequences
main
Active
binding affinity;geometric deep learning;virtual screening
applications to physical sciences (physics, chemistry, biology, etc.)
3;3;3;6;6
5;3;3;3;4
2;1;2;3;3
2;2;2;3;3
2;3;2;2;3
4.2
3.6
2.2
2.4
2.4
-0.102062
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In Section 3.1, what is the reference paper for PPB-Affinity Dataset? \n\nIn Section 4.1, you set the distance threshold for identifying protein-binding interface amino acids as 8 Å between the C-alpha atoms of two amino acids. This choice may seem somewhat arbitrary; could you elaborate further on the rationale behind selecting 8 Å as the threshold?\n\nCould you explain more about the data partition process, and why it can help to solve the data leakage problem?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The use of monotonic neural network-constrained multi-task learning (MMTL) expanded the development dataset to over 23,000 samples and helped to improve the model’s generalization abilities.\n\nSE(3)-Invariant attention is used to get features of protein complex structures using the iDist algorithm, and then clustering the protein complex structure features based on graph partition algorithms helps to address the data leakage problem." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, PPBind-3D and PPBind-1D are developed to predict protein-protein binding affinity based on three datasets PPB-Affinity dataset, Heterogeneous Affinity Dataset and DIPS-Plus dataset. PPBind-3D used SE(3)-Invariant attention module to capture structural information near the protein-protein binding interface to make its predictions. PPBind-1D was developed using sequence data to address the lack of structural data in practical applications." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Currently, the code doesn’t contain a data partition process.\n\nThe paper would be better if including other methods to compare their performance with PPBind-3D.\n\nThe metrics used to estimate performance only include spearman or Pearson correlation, lack of RMSE." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Since the proposed methods are meant to be applied to virtual screening, what is the efficiency of them? For example, the inference speed and memory consumption." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper introduces a novel approach to protein-protein binding (PPB) affinity prediction, integrating both structural and sequence-based models to address the high-throughput demands of drug discovery. The models, PPBind-3D and PPBind-1D, are designed with a sequence-structure alignment strategy that allows the sequence-only model to gain structural insights indirectly. This innovation effectively bridges the gap where structural data is unavailable. Besides, the authors use a monotonic neural network-based multi-task learning (MMTL) framework to incorporate heterogeneous affinity data, enhancing the model’s robustness while handling variations in measurement types. The authors also pay attention to data partitioning to avoid data leakage. These methodological choices are evaluated by ablation studies and real-world virtual screening case studies.\n\nThe clarity of the presentation is overall good to let readers understand the proposed models and the experiments. In terms of impact, this paper addresses a critical challenge in high-throughput screening by providing a flexible solution that has both structural and sequence-based models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work introduces two AI models called PPBind-3D and PPBind-1D to enhance protein-protein binding (PPB) affinity prediction, which is crucial for protein drug development. In detail, PPBind-3D leverages structural data near binding interfaces, supported by a novel monotonic neural network-based multi-task learning (MMTL) approach, which integrates diverse experimental datasets to improve generalization. Besides, PPBind-1D uses sequence-based data, aligning with structural predictions to address scenarios when structural data is limited. In the experiments, the authors demonstrate the models' potential to support high-throughput virtual screening of PPB affinities by illustrating three case studies in virtual screening applications." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The baseline models are missing in experiments, so it is unknown how well the proposed models perform when compared to existing ones. \n\nWhen partitioning the data, the authors only provide the partition performance according to distances. But it is hard to understand what distance level is good or not. I feel that using the protein sequence identity ratio between different proteins can be more straightforward.\n\nFor the results of the three cases in virtual screening applications, there can be data leakage between the test set and the training set. It would be interesting to know if the well-predicted structures/sequence data exist in the training set or share high similarities with the data in the training set." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Can the authors elaborate on the performance of PPBind-1D and PPBind-3D across additional external datasets? It would be helpful to understand how well these models generalize to datasets beyond PPB-Affinity and DMS-Het, especially for applications with different data types or measurement techniques.\n\n\nAdding an analysis on which structural or sequence features most impact the predictions would provide valuable insights. Are there any interpretability tools (e.g., SHAP values or feature importance rankings) applied to understand how features contribute to binding affinity predictions? This could increase the model’s practical applicability and user trust.\n\nCan the authors provide more details on the computational resources required for PPBind-3D versus PPBind-1D? A comparison in training and inference time, along with any scalability insights, would help evaluate the model’s applicability in real-world, high-throughput scenarios." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The methodology is strong, incorporating strict data partitioning and monotonic multi-task learning to enhance model generalization. While the technical explanations are clear, some sections could benefit from simplification for accessibility. This work’s flexible, scalable model has significant implications for drug discovery, offering a valuable tool for high-throughput screening relevant to both computational biology and AI communities." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a new approach for predicting protein-protein binding (PPB) affinity, which is essential in drug discovery. The authors developed two models, PPBind-3D and PPBind-1D. \n\nPPBind-3D leverages structural data to predict affinity using advanced data integration and a multi-task learning approach, which enables it to generalize well despite data variability. PPBind-1D, on the other hand, relies on sequence data alone, making it more applicable when structural data is unavailable. \n\nTo align PPBind-1D's performance with that of PPBind-3D, the authors introduced an alignment technique using additional unlabeled data, helping the sequence-based model approximate structural model performance. Evaluations show that these models, particularly PPBind-1D, can support high-throughput screening by predicting PPB affinity accurately, even under strict data partitioning to avoid leakage. The work’s impact lies in enhancing drug discovery workflows with a method that bridges data gaps while maintaining predictive accuracy." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The alignment method for integrating sequence-based features with structure-based predictions is intriguing but not fully detailed. 
Providing more in-depth explanations and visualizations, especially of how alignment influences the latent spaces between PPBind-1D and PPBind-3D, would strengthen understanding and reproducibility.\n\n\nWhile the paper includes ablation studies, adding more direct comparisons with existing models (e.g., CSM-AB, AREA-AFFINITY) using standard benchmarks would clarify the novelty and effectiveness of the proposed approach. Highlighting quantitative gains over established models would emphasize the advantages of PPBind-1D and PPBind-3D.\n\n\nA brief analysis of feature importance or interpretability of predictions, particularly around how sequence and structural features affect affinity, would make the work more useful for practical applications and provide valuable insights into model behavior." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No ethics review needed." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. What is the SOTA performance on the PPB task and on the datasets used, such as PPB-Affinity?\n2. Why specifically choose iDist and K-nearest neighbors methods? Although PDB code-based and time-based splitting methods are insufficient, why not consider sequence similarity-based splitting methods, for example? \n3. As the authors mentioned in Section 3, the lack of strict data splitting is problematic for validating models. Why do the authors use models trained on randomly split data in Section 5.2? How can model performance be validated when the data splitting process is not strict? Will there be a significant drop in model performance in the virtual screening scenario introduced in Section 5.2 when using strictly split datasets?\n4. Why was contrastive learning not selected for feature alignment?\n5. I'm unsure whether the log₂ enrichment ratio qualifies as an affinity measurement." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The structure of the article is clear and easy to follow. The figures are well-designed. The authors use biological measurement terminology correctly, such as subscripting characters where appropriate." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work addresses the task of predicting protein-protein binding (PPB) affinity to improve the efficiency of high-throughput screening in protein drug development. The motivation behind this work is to overcome limitations associated with traditional laboratory screening methods for PPB affinity, which are costly, time-consuming, and not well-suited for high-throughput applications. Additionally, existing deep learning models often lack sufficient high-quality data or generalization capability due to limited compatibility with diverse affinity data. To accomplish this, the authors developed two AI models, PPBind-3D and PPBind-1D. 
In this process, they focused on (1) utilizing a novel and large dataset, (2) strictly partitioning data for performance testing, and (3) introducing a \"feature alignment\" mechanism. The authors demonstrated the performance of their models using the PPB-Affinity dataset and three virtual screening cases." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Lack of Baselines**: Section 2, \"Related Work,\" mentions several existing studies on this task. However, **there is no comparison between the proposed models and previous models**, aside from the comparisons within the authors' own models (1D, 3D, and aligned).\n2. **Missing Important Figure**: **Figure 5 is an exact copy of Figure 4**. While this is likely unintentional, the absence of additional descriptions or tables showing the performance of the 1D model is a significant issue, as the case studies alone cannot demonstrate the model's general performance.\n3. **Presentation Weaknesses**: Tables would be more suitable for displaying model performance, especially for conference papers. Additionally, the title is overly long and lacks focus. For readers unfamiliar with the subject, the abbreviation \"PPB\" may be confusing; using it in an already lengthy title is somewhat counterintuitive. In Figures 7 and 8, the x-axis ranges in panels A, B, and C are inconsistent, making it difficult to identify trends. There is also a typo in line 413: \"affiity.\"" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Did the authors check for potential overlap (e.g., with iDist) between the DIPS-Plus and PPB-Affinity datasets? If there is some overlap, this might explain the success of the alignment procedure.\n\n2. The authors use the DIPS-Plus dataset, which has already been shown to contain many near-duplicates [1]. I suggest using datasets that improve on this issue and are bigger and of higher quality, such as PPIRef [1] or PINDER [2].\n\n3. What is the purpose of Figure 2? It is not well described, and it is unclear whether it conveys any important information. Consider removing the figure or explaining its purpose.\n\n4. Lines 178-179: \"Euclidean distances between each fold of data were calculated as shown.\" Where is it shown? Please explain in detail (SuppMat can be used).\n\n5. Is the geometric encoder (equations 1-5) completely novel? Did the authors draw inspiration from some existing work? Relevant work should be cited in case inspiration was taken from the literature. If the method is novel, it deserves more attention and should be discussed in more detail to provide intuition for the equations.\n\n6. 
I am not sure how advanced the benchmarking is for the dG prediction task; if it is hard to benchmark on that task, the authors could consider using PPI datasets such as PPIRef or PINDER and using the number of contacts between proteins as a proxy for binding affinity [3].\n\nReferences:\n\n[1] Bushuiev, A., et al. (2024). Learning to design protein-protein interactions with enhanced generalization, ICLR 2024.\n\n[2] Kovtun et al. (2024). PINDER: The protein interaction dataset and evaluation resource, bioRxiv 2024.07.17.603980; doi: https://doi.org/10.1101/2024.07.17.603980\n\n[3] Anna Vangone and Alexandre M. J. J. Bonvin (2015). Contacts-based prediction of binding affinity in protein–protein complexes, eLife 4:e07454. https://doi.org/10.7554/eLife.07454" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The idea of aligning the structure-based and the sequence-based models is interesting.\n\n2. The focus on dealing with the data leakage issue in PPI datasets is appreciated. Figure 4 illustrates the problem with data splitting well." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces two new models for the prediction of protein-protein binding affinity (dG), based on protein sequence or protein structure, respectively. The data leakage issue for protein-protein interaction datasets is discussed." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. No benchmarking against other methods. In Related Work, there is a paragraph listing several methods for dG prediction, such as dG-Affinity, PPI-Affinity, and AREA-Affinity, but none of these tools is benchmarked against in the paper. The authors should (i) make sure the related work is up to date and there are no more recent methods for the PPB affinity prediction task, and (ii) compare their methods against the SOTA.\n\n2. The \"monotonic neural network-constrained multi-task learning (MMTL)\" method is not clear at all. What is the operator $M_{\\theta_t}$? The text merely describes it as a monotonic neural network. What is the architecture? What are its parameters trained on? Is it trained together with the task for which equation (6) is used as the objective? Can the authors comment on the implications of co-optimizing the task and its learning objective?\n\n3. The claimed novelty of partitioning the dataset with iDist does not hold; this has already been done by the authors of iDist [1]. If the authors want to claim novelty, they should explain the novelty with respect to [1].\n\n4. A significant part of the results for PPBind-1D is missing because Figure 5 is a duplicate of Figure 4. Please fix." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024enhancing,\ntitle={Enhancing {PPB} Affinity Prediction through Data Integration and Feature Alignment: Approaching Structural Model Performance with Sequences},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xNDydjYBmC},\nnote={under review}\n}" }, "abstract": { "value": "One key step of protein drug development is the screening of protein-protein binding (PPB) affinity. 
The current mainstream screening method of PPB affinity is laboratory experiments, which are costly and time-consuming, making it difficult to quickly perform high-throughput screening. Various deep learning methods have been proposed to predict PPB affinity, but they are often limited by the availability of high-quality data and the compatibility of the algorithms with that data. In this work, we developed two AI models, PPBind-3D and PPBind-1D, to predict PPB affinity. PPBind-3D leverages structural information near the protein-protein binding interface to make its predictions. By employing monotonic neural network constrained multi-task learning, we effectively utilized heterogeneous affinity data from diverse wet lab experiments to expand the development dataset to over 23,000 samples, thereby enhancing the model's generalization capabilities. Additionally, PPBind-1D was developed using sequence data to address the lack of structural data in practical applications. During the training of PPBind-1D, we aligned it with PPBind-3D by incorporating an additional 42,108 no-affinity-label samples through an alignment approach. Finally, we demonstrated three application cases of our AI models in the virtual screening of protein drugs, illustrating that our models can significantly facilitate high-throughput screening." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "binding affinity", "geometric deep learning", "virtual screening" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d9ff7250849ae947a7ecc6e3b33dc548118173d8.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Enhancing PPB Affinity Prediction through Data Integration and Feature Alignment: Approaching Structural Model Performance with Sequences" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
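One reviewer above asks what the monotonic operator $M_{\theta_t}$ in the MMRL objective actually is; the submission only describes it as a monotonic neural network. For reference, below is a minimal sketch of one standard way to build such an operator, by forcing the effective weights to be positive and using monotone activations. The class name and architecture here are illustrative assumptions, not the authors' actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotonicMLP(nn.Module):
    """Scalar map that is monotonically non-decreasing in its input.

    Monotonicity is enforced by passing raw parameters through softplus so
    every effective weight is positive; combined with the monotone tanh
    activation, the whole composition is non-decreasing in x.
    """
    def __init__(self, hidden: int = 16):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(hidden, 1))
        self.b1 = nn.Parameter(torch.zeros(hidden))
        self.w2 = nn.Parameter(torch.randn(1, hidden))
        self.b2 = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, 1); softplus keeps effective weights > 0
        h = torch.tanh(F.linear(x, F.softplus(self.w1), self.b1))
        return F.linear(h, F.softplus(self.w2), self.b2)

# Quick check: outputs are sorted whenever inputs are sorted.
m = MonotonicMLP()
xs = torch.linspace(-3, 3, 50).unsqueeze(1)
ys = m(xs).squeeze(1)
assert torch.all(ys[1:] >= ys[:-1])
```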
xNaPs8bdLa
Aggregation of Multi Diffusion Models for Enhancing Learned Representations
main
Active
Diffusion Models;Conditional Generation
generative models
1;3;3;3;5;5;6
5;4;4;3;5;3;4
3;2;2;3;3;2;3
3;2;1;3;3;2;3
3;2;1;3;3;3;3
3.714286
4
2.571429
2.428571
2.571429
-0.239535
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "Potential breach of double-blind review: \n\n\nThe GitHub link in the paper https://github.com/Hammour-steak/AMDM is from a personal repository: https://github.com/Hammour-steak." }, "flag_for_ethics_review": { "value": [ "Yes, Other reasons (please specify below)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Is AMDM applied solely during inference with pre-trained models, or do you simultaneously train the diffusion models alongside the aggregation process?\n- Have you experimented with non-linear interpolation techniques, such as splines, rather than linear interpolation? Understanding how different interpolation methods might impact downstream performance would be insightful.\n- Has AMDM been evaluated on tasks where achieving fine-grained control is especially challenging, such as aggregating models with contrasting styles (e.g., realism and abstract)?\n- It would be valuable to visualize or analyze the learned weighting factors and the types of weighting factors learned during aggregation, as this could offer insights into the underlying structure of the intermediate variable space." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- New insights where the diffusion model fails to capture certain aspects of features\n- Good performance in the downstream benchmarking" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces the Aggregation of Multi Diffusion Models (AMDM), an algorithm to enhance fine-grained control in image generation by combining features from multiple diffusion models. It leverages two main techniques: spherical aggregation, which merges intermediate features with minimal manifold deviation, and manifold optimization, which refines these variables to align with the intermediate data manifold." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- A computational complexity comparison is needed during the inference to evaluate the algorithm's associated tradeoff, such as inference time.\n- Integrating multiple models might introduce unanticipated effects, especially when merging highly distinct models. Although manifold optimization aims to correct deviations, there’s little discussion on potential artifacts in complex or real-world scenarios." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "N/A" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This submission does not comply with the double-blind review policy since the provided code repo is non-anonymous." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This submission does not comply with the double-blind review policy since the provided code repo is non-anonymous." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This submission does not comply with the double-blind review policy since the provided code repo is non-anonymous." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* The abstract says \"Code is available at: https://github.com/Hammour-steak/AMDM \" which seems to violate the anonymous policy.\n\n* The paper seems to only present experiments on integrating into InteractDiffusion. Do the results hold for other models / combinations? \n\n* The intro states a number of nice issues \"Generating multiple objects with overlapping bounding boxes can lead to attribute leakage, where one object’s description inappropriately influences others, causing inconsistencies between objects and the background. Fine-grained interaction details may be illogical, and style integration may compromise object attributes.\" However, the paper does not seem to discuss these issues and whether AMDM can help solve them.\n\n* I didn't quite follow where in the paper this contribution is discussed/justified: \"Our algorithm and experiments reveal some unique properties of diffusion models: Diffusion models with a shared theoretical foundation possess the same mathematical essence,\neven if they differ in architecture, allowing operations on their intermediate variables; Furthermore, diffusion models initially focus on the generation of features such as position, attributes, and style, while later stages emphasize generation quality and consistency.\" But I would be interested in understanding this better, as it seems like a deep contribution. Specifically, can you give detailed explanations or experimental evidence supporting these claims about the properties of diffusion models, and clarification on how these properties are demonstrated through the AMDM algorithm and experiments." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The authors show the benefits of combining multiple diffusion models, as a way to get fine-grained control over multiple aspects. This is an interesting approach that could open up a lot of use cases for diffusion models, without needing to re-train or to train custom adapters for each style, attribute, etc. \n\nThe paper provides a theoretical justification of how their method works by analyzing the diffusion process. The authors are very clear about the technical details of their Spherical Aggregation and Manifold Optimization steps.\n\nThe authors provide both qualitative and quantitative evidence for the applicability and success of their method. For example, results on the COCO-MIG benchmark show improvements in several metrics." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies combining multiple diffusion models, using different features from each, to form one final model. The goal is to get more fine-grained control, overcoming limitations of existing guidance methods. The new algorithm Aggregation of Multi Diffusion Models (AMDM) consists of two key components: spherical aggregation and manifold optimization. The authors show that AMDM can improve fine-grained control and that they can use conditional diffusion models for specific aspects while aggregating the models. This avoids the need for custom datasets for each aspect the user may want to control." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The authors focus mostly on COCO-style captions and images. Hence, the evaluation is mostly on short captions and photo-realistic images of everyday life. However, many use cases of guidance go beyond these domains. It is not clear if this method can help with generating designs, logos, text, other types of digital art, etc. The study would be more convincing if the authors could evaluate at least 1-2 other domains.\n\n* The paper is light on baselines. For example, Table 1 only shows the authors' algorithms. How do the results compare to the several models mentioned in the related work, e.g., \"several studies have attempted to achieve fine-grained control (Huang et al., 2023a;\nHan et al., 2023; Smith et al., 2023; Gu et al., 2024; Kumari et al., 2023).....\" I understand that these methods may require training and/or architecture changes. But it would be good to know how AMDM compares to other approaches. Concretely, Layoutdm, Deadiff and Animatediff should be included for comparison / combination with AMDM. Or explain why direct comparisons to these methods may or may not be appropriate given the different approaches (e.g., training requirements, architectural differences).\n\n* Typo: In table 1, it says \"InteractiDiffusion\" instead of \"InteractDiffusion\"" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Could you provide comparisons with score function ensembling? \n2. How does the method scale to more than 3 diffusion models? Is there a limit at wich point intermediary samples are averaged too much and the backward process fails at producing a good quality image?\n3. Does the added \"robustness\" of having several models allow for a lower amount of diffusion steps?\n4. How important is the hyperparameter $s$? Does going all the way with $s=T$ improves the generated images? What is the minimal amout of \"merging\" compared to just following a single diffusion model?\n5. Your method works on merging intermediary samples of the diffusion process. I guess the underlying architecture of the different diffusion models can be very different (apart from the shared latent encoder). Do the different diffusion models you use share an architecture? If yes, do you think model merging (where weights of different versions of the same model are interpolated together) would work?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* To my knowledge the method and idea are novel. They provide a cheap method to improve diffusion model sampling without any retraining step.\n* Visually, the method seems to be improving both prompt adherence and image quality.\n* There is an appropriate amount of theory derivation to explain the different steps of the algorithm." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduced a new method to aggregate different diffusion models in order to increase the generated image quality and the adherence to prompt and instruction (such as localization). This new algorithms spherically interpolates intermediary samples of the diffusion process from different diffusion models up until a chosen timestep (at which point semantic information is often already set and will not change during the rest of the diffusion process). The authors present some theoretical elements about rectifications made after spherically interpolating images. Final images look of higher quality than original ones while also better following conditionning elements." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The paper is very light in quantitative metrics about image quality. The only two metrics seem to be about prompt adherence but there is nothing about image quality.\n* The paper lacks some comparison with existing methods (e.g diffusion model ensembling)\n* Compared to model merging, this method stills incurr an additional computational overhead when sampling with 2 models (twice the amount of Neural Function Evaluation)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Can you clarify the statement in Lines 280-282 about the manifold optimized point that should lie in the intersection?\n\n2. It seems that one of the diffusion models has to be chosen as theta_1. How does the performance change if we change the choice of theta_1 in the experiments? Given an arbitrary set of diffusion models, how should the user choose theta_1?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper proposes a method for effectively mixing the generation process of two or more diffusion models without extra training.\n\n2. The paper shows that the generated results can be better controlled by combining diffusion models with different specialties." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies the problem of aggregating multiple diffusion models for more fine-grained control of the generated results. The paper proposes two steps: 1) use spherical aggregation to mix the latent diffusion representations and 2) use manifold optimization to bring the aggregated result to samples with high probability. The paper shows that the proposed method can achieve better control by mixing two diffusion models with different specialties." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Some basic metrics of the aggregated performance should also be provided, e.g., DINO, FID, and CLIP score, and compared to the model outputs without aggregations.\n\n2. No ablation study result is provided. How should the performance change if we use other aggregation methods (e.g., linear aggregation), if we do not use the manifold optimization, or if we use the manifold optimization with various hyperparameters?\n\n3. Compared to the other approach, where training is applied to improve the fine-grained control of the diffusion model generation, the proposed approach requires incorporating multiple diffusion models during inference, increasing the inference costs. Including results from those methods as a reference is also beneficial. It may also result from some more advanced foundational models that are known to follow text better, such as SD3 and FLUX." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "* Line 149 \"analyzes\" -> \"analyze\"\n* Line 256: Do the deviations happen in practice?\n* Lines 264-268: The authors mention that they propose to shift the intermediate latent point by performing gradient ascent wrt $p_{\\theta_1}(x_{t-1}|x_t)$. 
First of all, since this is a Gaussian, we know it in closed form, so why use gradient ascent and not just target the specific value of the density? Secondly, I think that the authors meant the log-density and not the density (please correct me if I'm wrong)? Thirdly: When I look at the proof, I do not see the authors actually using gradient ascent. In the proof I do not understand where Equation (11) came from. How is it derived? Why is this the objective?\n* Line 280: What does it mean that the \"new variable contains information from $p_{\theta_2}$\"? Can the authors make this statement more precise?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* I think that it is easy to understand the method proposed by the authors.\n* The figures are neat and easy to understand." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a method for aggregating diffusion models of the same kind. It consists of two steps: spherical aggregation and \"manifold optimization.\" Empirically, the authors demonstrate its effectiveness on multiple models and tasks, showing its superiority over individual models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The biggest weakness of this work is the **lack of contextualization relative to prior work**. The authors do not mention any methods designed for composing generative models. I can suggest a few papers to start from: [5, 6, 7, 8]. The authors need to discuss those and other available methods for compositional generation, specifically:\n * How is their method different from others?\n * What potential unsolved problems does it tackle?\n * **Importantly**: How does it compare experimentally?\n* **Lack of ablation.** The authors propose some design choices like the choice of hyperparameters ($w, \eta_1, \eta_2, s$) and do not discuss how this choice was made or what impact it has. I would like the authors to discuss the following:\n * How is the performance affected for different values of the above-mentioned hyperparameters? How were they chosen?\n * **Importantly:** The authors propose spherical interpolation for combining information from different latent representations and argue that this is motivated by prior work [1]. How does this choice affect performance? How does it compare to an even simpler method like linear interpolation?\n* **Limited contribution.** The authors summarize their contributions in Lines 088-101, which can be split into two: a novel method for combining diffusion models of the same type and insights about the diffusion models' generative process. I will comment on each separately:\n * The authors do propose a method for combining diffusion models. There is an inherent limitation that this method only applies to diffusion models of the same kind (i.e., the same encoder, the same SDE, the same noise schedule, etc.). Furthermore, we do not know whether it is novel or how it compares with other methods (see the 1st weakness).\n * The authors claim that they make two observations about diffusion models in general: \n * First: \"Diffusion models with a shared theoretical foundation possess the same mathematical essence, even if they differ in architecture, allowing operations on their intermediate variables\". This is a trivial observation. This is because they approximate the same SDE. 
It has even been shown (Figure 7 in [2]) that latent representations of images coincide even if different model architectures are used to parametrize the score function. Furthermore, it is known (Figure 2 in [3]) that if you train two score-based diffusion models on two disjoint subsets of training data, they will generate the same image if conditioned on the same latent code, showing that it is the distribution of the data and the specification of the forward process that determine the generative distribution rather than specific architectures or data subsets.\n * Second: \"diffusion models initially focus on the generation of features such as position, attributes, and style, while later stages emphasize generation quality and consistency.\". First of all, I do not believe that the authors demonstrate this apart from one sentence mentioning it in lines 487-489: \"it can be inferred from the aggregation steps that the diffusion models initially focus on features such as position, attributes, and style, while later stages emphasize generation quality and consistency.\". Second of all, this is also a known observation, discussed e.g. in [4].\n* **Lack of mathematical rigor.** For example:\n * Definition 1 - This definition is not a precise mathematical definition. I would like the authors to clarify what they mean by $D$. Is this supposed to be the support of the distribution $p(x_t|y)$? Or the typical region of this distribution [9]? According to the explanation below, it is \"all possible data generated by a diffusion process\". This is not a useful definition, because for any $t > 0$ this is the whole of $\mathbb{R}^n$. This is because we are convolving the data distribution with another distribution whose support is $\mathbb{R}^n$. I presume that the authors rather meant the typical set, but this needs to be clarified.\n * Line 195: The authors refer to [1] to claim that $D$ will reside on an $(n-1)$-dimensional manifold. I am confused by how this is relevant. [1] assumes that data lies in an $l$-dimensional linear subspace. Are the authors making the same assumption? Also, [1] proves that the \"noisy\" manifold is \"concentrated\" on some manifold, where \"concentration\" is very precisely defined. Do the authors share the same definition when talking about the definition of $D$? This paragraph is not rigorous and is more confusing than helpful.\n * Line 230: What is the \"latent space encoder\"? This has not been properly defined. I assume the authors mean a different latent diffusion model, but sharing the VAE encoder? I.e., the mapping from data to $z_0$? Are the $x_t$ latent? I do not think that latent diffusion was actually defined in the paper mathematically.\n * Line 262: The authors write \"A Gaussian sample is likely to be drawn near the peak.\" Since this is not a precise statement, I assume that the authors mean that all samples are concentrated near the mean, and this is a false statement. See [9] for an explanation.\n * Misuse of the term \"manifold\". For example: The \"manifold optimization\" step proposed by the authors is essentially shifting a sample closer to the mean of the Gaussian. What is the manifold there? I think that this is a simple procedure that is described in unnecessarily complicated terms.\n\n---\n\nReferences \n\n[1] Chung et al. \"Improving Diffusion Models for Inverse Problems using Manifold Constraints\" (NeurIPS 2022)\n\n[2] Song et al. \"Score-Based Generative Modeling through Stochastic Differential Equations\" (ICLR 2021)\n\n[3] Kadkhodaie et al. 
\"Generalization in diffusion models arises from geometry-adaptive harmonic representations\" (ICLR 2024)\n\n[4] Deja et al. \"On Analyzing Generative and Denoising Capabilities of Diffusion-based Deep Generative Models\" (NeurIPS 2022)\n\n[5] Du et al. \"Compositional visual generation with energy based models\" (NeurIPS 2020)\n\n[6] Du et al. \"Reduce, reuse, recycle: Compositional generation with energy-based diffusion models and mcm\" (ICML 2023)\n\n[7] Garipov et al. \"COMPOSITIONAL SCULPTING OF ITERATIVE GENERATIVE PROCESSES\" (NeurIPS 2023)\n\n[8] Du et al. \"Compositional Generative Modeling: A Single Model is Not All You Need\" (ICML 2024)\n\n[9] Nalisnick et al. \"Detecting Out-of-Distribution Inputs to Deep Generative Models Using Typicality\" (arXiv)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "The image generation quality of the baseline methods seems a bit low. What are the SOTA methods currently in 2024? Have the authors consider works such as [3-4]? Will ADMD still be useful when the baselines are stronger?\n\n[3] Mou, Chong, et al. \"T2i-adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 5. 2024.\n\n[4] Feng, Weixi, et al. \"Training-free structured diffusion guidance for compositional text-to-image synthesis.\" arXiv preprint arXiv:2212.05032 (2022)." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is mostly well-written and easy to follow. The experimental results match the authors' claim that AMDM combines the advantages of different models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposed to boost Stable Diffusion based image generation by aggregating multiple diffusion models. The authors interpolate the updates of different denoising models and then shift the interpolation toward the high-density area of one of the models. The authors show that such aggregation can improve the generation quality of existing T2I models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "In general, I feel that the paper lacks a bit of novelty. The idea reminds me of previous works on the compositionality of diffusion models which are somewhat ignored in the paper. Some particular design choices are also not well-justified. See details below.\n\n1. The manifold argument is a bit far-fetched. Eq. 9 is not spherical linear interpolation in general. Manifold optimization typically refers to numerical optimization on manifolds but here it is merely a buzzword. The definitions in Sec. 
3.2 add very little to the paper - they are barely useful for the derivation, nor do they provide much insight.\n2. Combining the updates of multiple diffusion models is not new. This has been extensively explored first in EBMs [1] and later in DMs [2]. Maybe there is some merit in using \"spherical aggregation\" specifically for combining these updates, but its necessity is not evident in the paper. For example, what if the authors just use linear interpolation? How will this affect the generation? Such a comparison is important in this context.\n3. Typos here and there. Multiple mistakes from lines 226 to 228. I don't know what is going on with Figure 2. There are other typos and I suggest the authors polish the paper.\n\n[1] Du, Yilun, Shuang Li, and Igor Mordatch. \"Compositional visual generation with energy based models.\" Advances in Neural Information Processing Systems 33 (2020): 6637-6647.\n\n[2] Liu, Nan, et al. \"Compositional visual generation with composable diffusion models.\" European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Aggregation of multi diffusion models for enhancing learned representations, achieving fine-grained control." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024aggregation,\ntitle={Aggregation of Multi Diffusion Models for Enhancing Learned Representations},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xNaPs8bdLa},\nnote={under review}\n}" }, "abstract": { "value": "Diffusion models have achieved remarkable success in image generation, particularly with the various applications of classifier-free guidance conditional diffusion models. While many diffusion models perform well when controlling for a particular aspect among style, character, and interaction, they struggle with fine-grained control due to dataset limitations and intricate model architecture design. This paper introduces a novel algorithm, Aggregation of Multi Diffusion Models (AMDM), which synthesizes features from multiple diffusion models into a specified model, enhancing its learned representations to activate specific features for fine-grained control. AMDM consists of two key components: spherical aggregation and manifold optimization. Spherical aggregation merges intermediate variables from different diffusion models with minimal manifold deviation, while manifold optimization refines these variables to align with the intermediate data manifold, enhancing sampling quality. Experimental results demonstrate that AMDM significantly improves fine-grained control without additional training or inference time, proving its effectiveness. Additionally, it reveals that diffusion models initially focus on features such as position, attributes, and style, with later stages improving generation quality and consistency. AMDM offers a new perspective for tackling the challenges of fine-grained conditional control generation in diffusion models: We can fully utilize existing conditional diffusion models that control specific aspects, or develop new ones, and then aggregate them using the AMDM algorithm. This eliminates the need for constructing complex datasets, designing intricate model architectures, and incurring high training costs. 
Code is available at: https://github.com/Hammour-steak/AMDM" }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Diffusion Models", "Conditional Generation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/6253dedcfe9a5bb700d6ee7de5398bdc1e44ad0a.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/1514b6a68689872d405da4d105f942e77058fb22.zip" }, "title": { "value": "Aggregation of Multi Diffusion Models for Enhancing Learned Representations" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
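As a concrete reading of the spherical-aggregation and manifold-optimization steps described in the abstract above, here is a minimal sketch. The `reverse_step` and `posterior_mean` interfaces, the weight `w`, and the step size `eta` are assumptions standing in for the paper's actual components and hyperparameters ($w, \eta_1, \eta_2, s$); the "manifold optimization" is rendered as the mean-shift one reviewer describes, not necessarily the authors' exact update.

```python
import torch

def slerp(x_a: torch.Tensor, x_b: torch.Tensor, w: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two latents of the same shape."""
    a, b = x_a.flatten(), x_b.flatten()
    cos = torch.clamp(torch.dot(a, b) / (a.norm() * b.norm() + eps), -1.0, 1.0)
    omega = torch.arccos(cos)
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel latents: fall back to linear interpolation
        return (1.0 - w) * x_a + w * x_b
    out = (torch.sin((1.0 - w) * omega) / so) * a + (torch.sin(w * omega) / so) * b
    return out.view_as(x_a)

@torch.no_grad()
def aggregated_reverse_step(x_t, models, conds, t, w=0.5, eta=0.1):
    """One aggregated reverse-diffusion step across models sharing a latent space."""
    # Each model takes its own reverse step from the shared state x_t.
    proposals = [m.reverse_step(x_t, c, t) for m, c in zip(models, conds)]
    x_agg = proposals[0]
    for x_next in proposals[1:]:
        x_agg = slerp(x_agg, x_next, w)  # "spherical aggregation"
    # "Manifold optimization", read here as one step toward the Gaussian
    # posterior mean of a designated anchor model (a gradient-ascent-like
    # move on its log-density).
    mu = models[0].posterior_mean(x_agg, conds[0], t)
    return x_agg + eta * (mu - x_agg)
```

A plausible usage, matching the reviewers' description that aggregation is applied only for the first `s` of `T` steps, would call `aggregated_reverse_step` for `t = T, ..., T - s + 1` and then continue sampling with a single model.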
xNf8sOtFbx
On the Cost-Effectiveness of Partially-Annotating Methods for Multi-Label Learning
main
Active
Partially-annotating;multi-label learning
unsupervised, self-supervised, semi-supervised, and supervised representation learning
3;5;5;8
5;5;5;5
3;3;2;4
1;3;2;3
3;3;3;4
5.25
5
3
2.25
3.25
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "No specific questions" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "I like this paper. I think this paper effectively underscores \"the importance of data quality over quantity in the multi-label learning domain\"\n, backed by robust experimental design. \n\nThe methodology for comparing partially-annotating methods and the associated annotation costs is well thought out and executed, leading to convincing results that align with the \"quality over quantity\" insight." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the challenge of costly and time-consuming annotation in multi-label learning tasks by evaluating the cost-effectiveness of two partially-annotating methods: label-level partially-annotating (LPA) and instance-level partially-annotating (IPA). Through empirical experiments on the MS-COCO dataset, the authors demonstrate that IPA significantly outperforms LPA in terms of model performance, despite requiring fewer annotated instances. The study provides insights into the benefits of preserving co-occurrence relationships in annotations, highlighting that the quality of data can outweigh the quantity in training effective models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "More of a discussion point than a weakness. \n\nWhile low-quality data generally yields subpar results, previous work [1,2] has shown that large-scale partially-annotated datasets can be created without annotation costs from image-text pairs, leading to strong generalization (zero-shot performance). Consequently, pretraining on such large-scale partially-annotated data followed by fine-tuning on fully-annotated data may be an appropriate approach towards powerful tagging models.\n\nI encourage the authors to consider exploring in the context of such larger dataset settings in future work.\n\n\n[1] Tag2Text: Guiding Vision-Language Model via Image Tagging, ICLR 2024.\n\n[2] Recognize anything: A strong image tagging model. CVPR 2024 workshop." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the weaknesses above." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper introduces three distinct variants of LPA, which have different randomness.\n\nThis paper compares LPA and IPA, finding that IPA performs better given the same annotation cost." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Becasue fully annotating multi-label datasets is often impractical, this paper focuses on partial annotations. There are two primary types: (1) label-level partially-annotating (LPA), which annotates only a subset of the labels for each instance. (2) instance-level partially-annotating (IPA), which annotates a subset of the instances. This paper empirically evaluates LPA and IPA at the same annotation cost. Extensive experiments indicate that IPA preserves co-occurrence relationships, resulting in better performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This paper explores two annotation settings that deviate from the mainstream partial multi-label setting. Please see Table 1 of DualCoOp: Fast Adaptation to Multi-Label Recognition with Limited Annotations.\n\nA major issue in partial multi-label is the infeasibility of complete labeling due to the high number of categories, leading to potential omissions and errors. However, IPA annotates all labels for selected images, meaning it does not fully address such issue. Additionally, this work does not introduce a new approach.\n\nPartial multi-label settings extend beyond LPA and IPA. Most mainstream methods annotate only a subset of labels per image, a topic this study does not discuss or analyze.\n\nTable 1 should include fully labeled experimental results.\n\nThe benchmark used is limited. commonly datasets like VOC, NUSWIDE, and CUB should be included for comparison." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See the weakness above." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The study of the cost-effectiveness of different labeling methods sounds quite interesting and has significant guidance and meaning for industry applications.\n\n2. The experimental results are relatively comprehensive." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper compares label-level partial annotation (LPA) and instance-level partial annotation (IPA) in multi-label learning tasks to determine which is more cost-effective. 
The authors manually annotated MSCOCO images using both methods and found that IPA, despite annotating fewer examples, yielded significantly better model performance than LPA. The paper suggests IPA's superiority is due to its preservation of label co-occurrence relationships, which helps models capture correlative patterns." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper only discusses two labeling methods. Can other labeling methods be included for discussion, such as in reference [1]?\n\n2. LPA adopts a single positive label labeling method, which is not the traditional partial label setting [2]. If it were the traditional partial label setting, which is more practical than the single positive label setting, would the analysis and conclusions of this paper still hold?\n\n3. Defining the cost of different labeling methods is highly uncertain because, in the actual labeling process, in addition to process design, the proficiency and fatigue level of the labelers must also be considered, which can cause uneven costs. How did the authors consider this issue?\n\n[1] Shen L, Zhao S, Zhang Y, et al. Multi-Label Learning with Block Diagonal Labels. ACM Multimedia 2024. 2024.\n\n[2] Chen T, Pu T, Liu L, et al. Heterogeneous semantic transfer for multi-label recognition with partial labels. International Journal of Computer Vision, 2024: 1-16." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Do the paper's conclusions still apply to other contemporary algorithms, e.g., SARB[1], DualCoOp++[2], HST[3]?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This work offers a compelling and important motivation, providing essential guidance for future multi-label recognition (MLR) research.\n2. A comprehensive analysis of extensive experimental results reveals the underlying reasons in detail." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Unlike existing studies, the authors explore an intriguing question in multi-label learning: should we partially annotate at the label level or the instance level? Through extensive experiments and a proposed causal reasoning framework, they demonstrate that instance-level partial annotation (IPA) maintains complete co-occurrence relationships, which proves more beneficial for enhancing multi-label recognition (MLR) model performance compared to label-level partial annotation (LPA)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The discussion of MLR-PL is not sufficient, ignoring some recent work (e.g., SARB[1], DualCoOp++[2], HST[3]).\n2. 
The comparison algorithms are somewhat outdated.\n\n[1] Semantic-Aware Representation Blending for Multi-Label Image Recognition with Partial Labels, AAAI 2022. \n[2] DualCoOp++: Fast and Effective Adaptation to Multi-Label Recognition With Limited Annotations, TPAMI 2023. \n[3] Heterogeneous Semantic Transfer for Multi-label Recognition with Partial Labels, IJCV 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024on,\ntitle={On the Cost-Effectiveness of Partially-Annotating Methods for Multi-Label Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xNf8sOtFbx},\nnote={under review}\n}" }, "abstract": { "value": "Precisely annotating instances with multiple labels is costly and has emerged as a significant bottleneck in real-world multi-label learning tasks. To deal with this problem, the most straightforward strategy is partially-annotating, which aims to reduce the cost by annotating only a subset of labels. Existing works mainly include label-level partially-annotating (LPA), where each instance is assigned a subset of positive labels, and instance-level partially-annotating (IPA), where all positive labels are assigned to an instance, but only a subset of instances are annotated. However, these methods tend to focus on improving model performance under each type of partial annotation, often neglecting a fundamental question: \textit{which method is the most cost-effective?} In this paper, we empirically evaluate which partially-annotating method achieves better model performance at the same annotation cost. To make a fair comparison, we manually annotated images in the MS-COCO dataset using two partially-annotating methods and recorded their average annotation time per image. This allows us to train models on two types of partial annotations with the same annotation cost and to compare their performance. Empirical results show that even when the number of examples annotated with IPA is only one-fifth that of LPA, models trained on IPA annotations significantly outperform those trained on LPA annotations, showing that IPA is significantly more cost-effective than LPA. To explain the superiority of IPA, our causal reasoning framework shows that compared to LPA, IPA preserves complete co-occurrence relationships, enabling the model to capture correlative patterns, which is useful for improving model performance." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Partially-annotating", "multi-label learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d660a84364b344235b32690abec3837fbbca3221.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "On the Cost-Effectiveness of Partially-Annotating Methods for Multi-Label Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xNgmEWmd9T
Accumulator-Aware Post-Training Quantization for Large Language Models
main
Active
Accumulators;Deep Learning;Inference;Quantization
infrastructure, software libraries, hardware, systems, etc.
5;5;5;6
3;4;2;3
3;3;2;3
2;2;2;3
2;2;3;3
5.25
3
2.75
2.25
2.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The paper designs AXE as extensions on top of existing PTQ algorithms: GPFQ and OPTQ, claiming them as state of the arts. While OPTQ is OK, but GPFQ is an earlier work. Please justify why those two quantization algorithms are picked and comment the applicability of AXE to other PTQ methods.\n\nThe accumulation aware approach was motivated from the implementation perspective, but results are only evaluated from the model accuracy performance. Yes, the AXE is effective in avoiding numerical overflow and preserve model performance. Is it possible to justify the accumulation aware approach in terms of latency or throughput in implementation? This can make the work more convincing." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "+ The accumulation-aware approach is well motivated from the hardware and implementation perspective, because when weights and activations are quantized into low-precision, the 32-bit accumulation consumes majority of power and area. And using low-precision on accumulation may increase the risk of numerical overflow which degrades model accuracy.\n\n+ The paper adopts an effective approach to theoretically gurantee overflow avoidance by constraining ||q||1 in post training quantization process. To solve the problem, AXE translate into two accumulator-aware constraints.\n\n+ The multi-stage accumulation extension of AXE is effective in improving throughput and scaling to large language models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work investigates post training quantization from an accumulator-aware perspective. They aim at using low-precision at accumulation while avoiding the overflow issue. The paper proposes the AXE, as a practical, low-overhead framework as extensions on top of two state-of-the-art PTQ algorithms: GPFQ and OPTQ. The work supports full datapath optimization and scales to large language models. They achieve improvements in the trade-off between accumulator bit width and model accuracy over baseline methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The adoption of the two PTQ algorithms GPFQ and OPTQ and the applicability of AXE to other PTQ need justification.\n\n- Because the concept of accumulation aware quantization was proposed from the implementation perspective. It is more convincing to demonstrate the performance in terms of latency or throughput besides model accuracy.\n\nSee the questions section for more details." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- CPU cores, cude cores, tensor cores support different number formats, each with different (unknown) accumulator size. How quantizing accumulator lead to advantages for pre-training, fine-tuning, or inference of LLM models?\n- Using OPTQ and GFPQ as a baseline is interesting in terms of accuracy, but they lead to smaller models after their use. What AXE brings to the table practically?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Thi sis the first formal study of quantization on the accumulator size. The paper is well-written and easy to follow with theoretical justifications. The innovative idea appears in equation (17) as a layer-wise operation. The authors adapt this result for two well-known post-training quantization methods GPFQ, and OPTQ." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper is the first study on post-training quantization while keeping the accumulator in the picture. A low-bit accumulator risks to overflow, but speeds up the arithmetic computation. They use L1 constraint on the inner product counterpart to guarantee numerical stability in equation (2) and build a quantization method that controls the accumulator based on this result. \nThe L1 constraint is the same formulation as first introduced in compressed sensing by Donoho (1994) and later called the lasso of Tibshirani (1996) in the context of linear regression." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Although this is the first study on accumulators, I doubt its usefulness.\nOften, the accumulator size is hardware-dependent, and sometimes even unknown. There are ways to guess the accumulator size by running various experiments, but they are not revealed by the manufacturer. In the context of quantization only weights or weight-activation, the benefit is clear; I wonder how we can benefit from quantizing accumulators unless we design a new processor or a co-processor. This limits the impact of this study unless the authors provide a guideline on how to use the accumulator bit size in practice on certain processor." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Based on the two points I raised in Weakness, can you please\n\n1. Add actual performance metrics (eg. throughput) in Table 1 and a direct comparison to more related method that are actually integrated in LLM serving engines (vLLM)\n2. Extend the accuracy comparison to more LLM model famil" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well-written and easy to understand. The optimization it proposes concentrates on low-level hardware details that significantly differs from existing approaches in quantization research. Notably, the issue of accumulation round-off errors, which the paper addresses, is frequently overlooked by the Efficient AI community." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper tackles a specific challenge related to low-level hardware architecture, specifically targeting the accumulation process in a Post-Training Quantization (PTQ) setting. The accumulator design in current hardware architectures is prone to significant numerical deviations when numbers are heavily quantized. The paper examined two distinct cases: a singular accumulator and an accumulator within an adder tree's output. The analysis largely focuses on integer arithmetic. The paper's core objective is to enhance a subset of quantization algorithms, such as GPFQ, by proposing an L1-norm penalty for high-magnitude post-quantization weights, as these can introduce errors in subsequent accumulation stages." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper is too focused on quatnization that is associated with the low-level hardware architecture, making me feel ICLR may not be a very suitable venue for work like this. \n\nThe paper's presentation raises concerns regarding its background setup and evaluation. \n\nFirst, it fails to acknowledge a range of prior studies in this field, including LLM.int8, ZeroQuant, AWQ, and others. Moreover, the paper lacks comparative analysis with weight-activation quantization methods, making it difficult to gauge the effectiveness of the proposed technique relative to existing quantization research. Most current works aim to quantize weight values to alleviate pressure on HBM bandwidth. Typically, they dequantize model parameters and perform multiply-accumulate operations in higher precision (e.g., fp16). Without a clear comparative study on end-to-end GPU performance, it is challenging to understand the benefits of the suggested method, especially in memory-bound LLM inference, and how much advantage is gained from enabling accurate low-precision arithmetic operations.\n\nSecond, the proposed method is mainly evaluated on a single modern LLM family (Pythia). The model under the choice here (Pythia) is not a popular one, and this again, makes comparing to other methods very challenging. An obvious good candidate for an evaluation like this would be the LLaMA family models. The paper also extends the evaluation to the OPT and GPT2 models. However, both models are fairly small in size (< 1B) and also are fairly old." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "What does Table 4 try to deliever here? There is no description in the main context of the paper." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The AXE method is theoretically grounded.\n\n2. AXE fills a gap in PTQ by addressing overflow risks in low-precision settings, potentially benefiting deployment on resource-constrained hardware.\n\n3 The paper introduces a novel overflow handling approach in PTQ, potentially expanding PTQ’s applicability to larger models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces AXE, a framework for overflow-avoidant, accumulator-aware quantization in the PTQ setting, aimed at efficient low-precision deployment of large models. AXE extends existing PTQ methods like GPFQ and OPTQ by managing accumulator bit width to improve resource efficiency without requiring retraining, and supports multi-stage accumulation for scaling to large models. The authors report gains in task performances over PTQ baselines with reduced accumulator bit width." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. I didn't find any evidence of actual efficiency gain in the experiments apart from the bit width reduction and perplexity improvements.\n\n2. Currently the paper evaluates on perplexity and zero-shot reasoning tasks, this IMO is not enough, more NLP tasks need to be tested to consolidate the efficiency of the AXE method.\n\n3. Novelty: While AXE extends existing PTQ methods with accumulator-aware quantization, much of its methodology relies on previously established concepts. The theoretical contributions build on recent QAT-based methods, and the extension to PTQ, while practical, does not introduce a fundamentally new approach to quantization beyond overflow handling.\n\n4. Writing could be improved, in particular the topic introduction and storytelling. I had a hard time following the paper." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce a low-overhead framework for accumulator-aware post-training quantization that significantly improves the tradeoff between accumulator bit width and model accuracy in quantized large language models." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024accumulatoraware,\ntitle={Accumulator-Aware Post-Training Quantization for Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xNgmEWmd9T},\nnote={under review}\n}" }, "abstract": { "value": "Several recent studies have investigated low-precision accumulation, reporting improvements in throughput, power, and area across various platforms. However, the accompanying proposals have only considered the quantization-aware training (QAT) paradigm, in which models are fine-tuned or trained from scratch with quantization in the loop. As models continue to grow in size, QAT techniques become increasingly more expensive, which has motivated the recent surge in post-training quantization (PTQ) research. To the best of our knowledge, ours marks the first formal study of accumulator-aware quantization in the PTQ setting. To bridge this gap, we introduce AXE—a practical, low-overhead framework of accumulator-aware extensions designed to endow overflow avoidance guarantees to existing layer-wise PTQ algorithms. We theoretically motivate AXE and demonstrate its flexibility by implementing it on top of two state-of-the-art PTQ algorithms: GPFQ and OPTQ. We further generalize AXE to support multi-stage accumulation for the first time, opening the door for full datapath optimization and scaling to large language models (LLMs). We evaluate AXE across autoregressive language generation models and observe significant improvements in the tradeoff between accumulator bit width and model accuracy over baseline methods." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Accumulators", "Deep Learning", "Inference", "Quantization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/013e24958bd9095079c881a24930445097ba38bb.pdf" }, "presentation": null, "primary_area": { "value": "infrastructure, software libraries, hardware, systems, etc." }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Accumulator-Aware Post-Training Quantization for Large Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xNsIfzlefG
Discrete Distribution Networks
main
Active
Generative Models;Image Generation
generative models
3;6;8
4;4;4
2;3;4
2;3;4
2;2;4
5.666667
4
3
3
2.666667
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weakness." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The proposed method is simple and seemly intuitive, and has been experimented on small-scale datasets like CIFAR-10." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes to generate samples in a sequential way: pass the output of the previous module into a function randomly selected from a set of K functions in the current module, with the initial input to the first module being a zero vector. To train each module in the space, one first collect a trajectory with these modules except that one does not do random sampling out of K functions but pick the one with the output closet to the data sample $x$. With this trajectory, one computes L2 distances between the output of each module and $x$ and optimize it. The paper shows some empirical results on the CIFAR-10 dataset." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Purely based on the presentation of the paper, I find the proposed method neither theoretically attractive, as it does not explicitly model probability distributions or model some complex distributions, nor empirically useful due to its worse performance compared to simple baselines like DCGAN. Indeed, the paper claims that VQ-VAE is unsatisfactory (e.g., in the abstract) but it does not even compared against VQ-VAE.\n\nIf we take a closer look into the proposed method, it seems to me not too different from a slightly-different VQ variant of diffusion models (e.g., https://arxiv.org/abs/2111.14822) -- running discrete diffusion (as the proposed model is basically trained in a similar way) on a latent space represented by a codebook of features, probably with some additional gradients of hierarchical latent spaces. The paper fails to connect the proposed method to a lot of existing papers and explain 1) what is different and novel given these existing methods and 2) why these design choices." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* Can DDN scale to larger and higher resolution datasets, such as FFHQ 256x256, or ImageNet 256x256? 
\n* What are the parameter counts of the baselines compared (DC-GAN, IGEBM, VAE, etc.)? DDN benefits from more layers since it can refine more, so the baselines should have similar parameter counts. \n* How does a single shot DDN compare against a default DDN allowing for the same number of refinements?\n* Generating multiple samples in parallel for sampling is expensive. What are the memory requirements compared to the baselines in the paper?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The idea of sampling an image and autoregressively feeding it back in through each layer in a forward pass is novel to the best of my knowledge. \n* The ability to condition generation on external signals without gradients is a useful property. \n* DDN is showcased for a variety of applications such as inpainting, colorization, denoising, super-resolution, CLIP-guided editing. \n* A variety of tricks are provided to help training, such as the Split-and-Prune algorithm, chain dropout, residual learning, leak choice." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work proposes Discrete Distribution Networks (DDN) a new class of image generative models. Discrete Distribution Layers within a DDN output multiple images at a time, and one image is sampled and concatenated as a feature to input to the next layer in an autoregressive fashion. During training, the loss is a reconstruction loss of the output sample of each DDL that best matches with the input image. Several tricks are provided to help optimization. During sampling, external signal can be provided without using gradients in order to condition generation. Qualitative and quantitative results are provided for unconditional CIFAR-10, FFHQ, and Celeb-A generation in addition to qualitative results for MNIST." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* I understand that this paper serves as a proof-of-concept for a new type of generative model, but the model is validated on small scale datasets (e.g., CIFAR-10, MNIST, CelebA, FFHQ-64x64). Furthermore, it does not compare against more modern approaches such as diffusion models, and quantitatively lags behind DC-GAN. However, I do not think it is fair to hold this novel method to the same standard as more matured, modern approaches. \n* The method relies on generating multiple samples in parallel and requires multiple layers for further refinement. This seems quite expensive and hard to scale to higher resolutions. \n* The placement of tables and figures reduces the quality of presentation. For instance, Figure 9 has a large white space above it. Table 2 is presented on page 8 even though ablations are not addressed in text until page 10. Figure 5 showcases an experiment which is not addressed in the main text of the paper. So it would be more fitting in the appendix." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "While has some notable weaknesses, I think this is a very good paper that can open a door to new directions in generative modeling. It is important for me to point to the courage of trying new refreshing approaches (as opposed to building upon current trends). Such papers should always be judged under the understanding that for existing methods, a lot of engineering has taken place and should be thought of as the first GAN paper or the first Diffusion models paper (Sohl-Dickstein 2015)." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. I find the method novel and elegant.\n2. The ability to produce visualization of the hierarchical generation (figs 8, 18) is a very enlightening feature.\n3. The authors propose some practical techniques to deal with this non-differentiable sampling. The proposed \"split-and-prune\" trick is clever and elegant.\n4. The novelty is very strong, and this should not be overlooked. This is a whole new method, very different from any of the existing generative models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a new generative model, based on generating K examples at each layer and choosing the one closest to a ground-truth instance. This formulates a discrete hierarchical distribution. At inference, taking a random node at each layer provides a random sample. Both conditional and unconditional generation are demonstrated." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Theoretical analysis is missing. No mathematical derivation that shows why the distribution of generated images should converge to the real data distribution.\n2. Eventually, if I understand correctly, for unconditional generation, the model has a finite number of specific examples it can produce K^L. Even if this number is big, this suggests that the model essentially is a compressed way of storing all its possible results in a tree based database. This might make a problem to scale up. Since the demonstrated data relatively has few dimensions, this suggests that actually holding the entire dataset might require more or close storage to the model itself. A quick calculation to demonstrate: MNIST has 70k images (train+test) of 28x28, this is 54880000 integers: ~55MB. EDM smallest model has 62M parameters, even if we take these as 1 byte each it is heavier.\n3. I am adding the weakness of low quantitative results and lack of demonstration of higher scales. To be clear, I think it is legitimate for novel methods but it is still a weakness. (i.e. if this model would have produced SotA generation results it would have been rated higher)." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "A Novel Generative Model with Simple Principles and Unique Properties." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024discrete,\ntitle={Discrete Distribution Networks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xNsIfzlefG},\nnote={under review}\n}" }, "abstract": { "value": "We introduce a novel generative model, the Discrete Distribution Networks (DDN), that approximates data distribution using hierarchical discrete distributions. We posit that since the features within a network inherently capture distributional information, enabling the network to generate multiple samples simultaneously, rather than a single output, may offer an effective way to represent distributions. Therefore, DDN fits the target distribution, including continuous ones, by generating multiple discrete sample points. To capture finer details of the target data, DDN selects the output that is closest to the Ground Truth (GT) from the coarse results generated in the first layer. This selected output is then fed back into the network as a condition for the second layer, thereby generating new outputs more similar to the GT. As the number of DDN layers increases, the representational space of the outputs expands exponentially, and the generated samples become increasingly similar to the GT. This hierarchical output pattern of discrete distributions endows DDN with unique property: more general zero-shot conditional generation. We demonstrate the efficacy of DDN and its intriguing properties through experiments on CIFAR-10 and FFHQ." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Generative Models", "Image Generation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b14ddac2767fd00a620e61ceb0a3d24411472757.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/a9dcd340ef3ff398a42f50eecbae9828f66a3ac8.zip" }, "title": { "value": "Discrete Distribution Networks" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xNwmWaq2KN
Novelty Unlocking with Multiobjective Generative Models: Batch Diversity of Human Motions
main
Active
Multiobjective optimization;Diverse In-Betweening Human Motions
generative models
5;5;6
4;2;3
3;3;3
2;2;3
2;3;3
5.333333
3
3
2.333333
2.666667
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. I am having trouble understanding the diversity component as defined in Equation 2. Isn't $C(Y)$ a single value between $0$ and $D-1$ and $P_c(Y)$ the probability of Y belonging to class $c$? Can you elaborate on how this definition ensures diversity?\n\n2. Can you explain how your model differs from simple rejection sampling, where $N$ motion sequences are iteratively generated and the $M<N$ sequences with the highest diversity (using any heuristic or method, such as pairwise distance maximization) are retained? Then, samples below a certain smoothness threshold are rejected, and this cycle repeats until $N$ samples are obtained.\n\n3. I believe it would be beneficial to compare the proposed method with straightforward rejection-based sampling approaches to highlight the unique advantages of your method.\n\n4. I highly recommend that the authors include recent relevant diffusion works of CondMDI (Cohan et al., 2024) and OmniControl (Xie et al., 2023) in the related work section (Section 2.2).\n\n5. As with any iterative or rejection-based method, inference time is a key consideration. What is the inference time with this method? How does it compare with SOTA methods?\n\n6. What is the data representation used here? Is it global joint positions? How are the keyframes defined?\n\n7. I am curious to know the details of the backbone generative models used in this work. Particularly, how they condition on keyframes. I think these details need to be added to the supplementary material." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The problem of diversity in human motion generation/completion is an important problem. One of the major limitations of most generative models developed for the in-betweening task, is the lack of diversity due to overfitting.\n* Paper is well-structured and easy to read.\n* The experiment section contains most of the important in-betweening baselines." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Authors propose a multi-objective framework for the human motion in-betweening task. More specifically, they introduce a bi-objective optimization problem; optimizing the diversity and smoothness of transitions between keyframes and generated motions. The proposed optimization framework is applied on top of a pretrained generative model to produce the final diverse completed motions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* My main concern with this work is the substance of the paper. 
To me, it primarily appears that the proposed framework suggests iteratively sampling from a pretrained generative model, rejecting samples that lack sufficient smoothness while ensuring diversity among the generated samples. The only substantial contribution seems to be the definition of a bi-objective function to balance both diversity and smoothness. However, I am not entirely convinced that this specific framing is necessary; simply rejecting samples based on smoothness and applying a diversity metric across the entire sample set might have been equally effective.\n\n* While I appreciate the authors' thorough investigation of related work and relevant baselines, some recent in-betweening studies are missing from the discussion and experiments sections. For example, CondMDI (Cohan et al., 2024) and OmniControl (Xie et al., 2023) are notable recent works that should be considered.\n\n* The effectiveness of the proposed method relies heavily on the performance of the underlying generative model. If the backbone generative model has limited diversity, the iterative framework is unlikely to significantly enhance diversity." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Questions:\n- Q1: What is a \"Hamilton\" distance? Do you mean Hamiltonian distance or Hamming distance?\n A web search produced no results for this concept.\n- Q2: Wondering why the \"+P_C(Y)\" term is really needed, i.e., does it add value? \n The addition to the classification category number is quite strange, given that they represent different things.\n- Q3: how do FID_tr and FID_te differ?\n- Q4: It is still unclear how the variable-length motions are encoded and sampled.\n- Q5: It is not clear how the generative model is conditioned on the previously-generated samples.\n Does this happen implicitly via the memory/GRU? If so, how is the GRU update trained so as to be incentivized\n to make the next sample highly-diverse with respect to previous samples? This reader is very confused on this point." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- an interesting problem, i.e., maximizing diversity within a batch of samples coming from a generative motion model.\n The specific problem being tackled here, i.e., generating diverse ways to produce a bridging motion that connects \ntwo existing motion sequences is interesting, although it is quite a specific problem domain.\n- Reasonablly extensive quantitative evaluations\n- Potentially an interesting and novel method for achieving the diversity, although I still do not understand it all\n- the applicability of this as a method to different classes of generative models is a real plus, i.e.,\n GANs, VAEs, and diffusion models." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "A method is presented for generating batches of diverse examples for motion in-between problems, via generative modeling,\ni.e., a VAE, GAN, or DDPM model. Thus given two motion sequences A and B, the goal is to be able to generate diverse sequences \n(possibly of varying length) that connect the end of A to the start of B. This is formulated as a multiobjective optimization problem,\nwhereby a population of samples is repeatedly adapted to capture the pareto front of this optimization.\nKey to the method is a classifier C(Y) that can categorize the motion-type of a motion Y into one of D distinct categories.\nThe multiobjective optimization is defined in a 2D space defined by two functions, F1(Y) = alpha1(Y) + Beta(Y) \nand F2(Y) = (1-alpha1(Y)) + Beta(Y).\nAlpha1(Y) maps the most-likely class evenly along the interval [0,1], (plus an epsilon that is related to the likelihood of that class).\nBeta(Y) defines a smoothness loss. Points along the pareto front thus seek to maximize class diversity, as well as being as\nsmooth as possible. Diverse samples are generated by iteratively calling the generative network, each time conditioning\nthe network on the motions to be connected, e.g., A and B as above, and the already-generated samples, Y_i.\nA memory-augmented Transformer-based architecture is used as the encoder for a conditional VAE, to generate new samples.\nThe results also present GAN and DDPM generative results.\n\nThe method is trained and tested on 4 different motion datasets (BABEL, HAct12, NTU, GRAB) and evaluated\nbased on FID and a diversity metric (APD), and two others (ACC, ADE).\nThe method produced the best results for FID and diversity, particularly for the DDPM version.\nIn addition to the quantitative results, qualitative results are provided using a visualization of the pareto front,\nand a figure that illustrates 4 diverse samples generated for 2 example problems.\nNo video is included." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Weaknesses:\n- It is difficult to understand the given multiobjective framework.\n As best I understand it, is described by a 2D pareto optimal front, as described in the summary I've given above.\n In particular, the intuition of equation should be described. Figures and diagrams would really help the exposition.\n The specific motivation and geometric intuition for alpha1(Y) could be explained, and how it encourages diversity.\n This reader spent was stuck on equation (2) for a long amount of time.\n- The multi-class classifier C(Y) is key to the method, but is not described in detail in the results and experiments,\n unless I missed it. This leaves the reader confused about the intent of this classifier, and its importance.\n- There are no video results, unless I missed it as supplemental material (I did check OpenReview twice for this).\n This makes it really difficult to judge the quality and diversity of the output in practice.\n- A variety of things about the method that were difficult for this reader to understand -- see questions below.\n\nMinor comments:\n- The structure of the paper was challenging for this reader. 
Leading with Figure 3, then Figure 2, and then Figure 1\n is an alternate order that makes more sense to this reader.\n- L047: \"batch diversity\" is ambiguous, i.e., it could mean intra-batch or inter-batch diversity.\n- the notation of \"decision variables\" comes as a surprise. Perhaps it is standard in multiobjective optimization,\n but it is worthwhile motivating in the current context. Is it simpler to be thinking in terms of \"samples\"\n and \"sample space\"?\n- the Pareto Dominance math is more intuitive when helped by a related figure\n- related works on animation and pareto-frontiers:\n \"Sampling of Pareto-Optimal Trajectories using Progressive Objective Evaluation in Multi-Objective Motion Planning\"\n \"Diverse motion variations for physics-based character animation\"\n- \"in-betweenning\" should probably be \"in-betweening\"\n- eqn (4): the addition of beta(Y) here seems like a bit of a hack. \n It would also be nice to more generally understand why the smoothness component is needed, i.e., why can't the\n conditional generative model capture this? Also, Beta(Y) in eqn (3) appears to be a vector, whereas in eqn(4) it is scalar.\n Which is it?\n- \"the generative model is prompted\": in the age of LLMs, this is an overloaded (and therefore ambiguous) phrasing,\n as there are no LLMs involved.\n- L378: \"Action Accuracy (ACC), Action Accuracy (ACC)\" (sic)\n- Figure 4: labeling the default locations of the D classification categories would help interpret the pareto-front figure\n- Figure 5: It is difficult to interpret whether to read the figure on the left, from left-to-right or right-to-left" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. **Computational Overhead of the Evolutionary Algorithm:** EA will introduce extra computational cost during inference time, especially when using expensive diffusion models. Could the authors elaborate on what the computational overhead is compared with a stand-alone generative model?\n \n\n2. **Comparison with Conditional Generative Models:** Given the classifier that the authors trained, one might imagine training a conditional generative model based on the classifier. Could the authors explain why using an EA would be better than training a motion-class-conditioned generative model?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The authors' formulation of the diverse motion generation problem as a multi-objective optimization problem is novel. The derived general generative framework effectively guides the sampling process towards diverse motions, successfully balancing diversity and quality in generated in-betweening human motion sequences." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the problem of generating diverse in-between motions by formulating it as a multi-objective optimization problem. The proposed Multiobjective Generation Framework with In-Betweening Motion Model introduces an evolutionary algorithm at the inference stage to enhance motion diversity while maintaining motion quality." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Conditional diffusion models, such as those presented in [1] seem to solve the same problem using generative models with conditional diffusion. By changing the conditions, they can also generate diverse motions. It would be helpful if the authors could compare their approach to these methods or explain why the multiobjective formulation is necessary, and whether it can be further combined with conditional motion generators. Moreover, a discussion over the difference and advantage in terms of formulation compared with [1] would enhance the quality of the paper. \n\n[1] S. Cohan, G. Tevet, D. Reda, X. B. Peng, and M. van de Panne, “Flexible Motion In-betweening with Diffusion Models,” in ACM SIGGRAPH 2024 Conference Papers, in SIGGRAPH ’24. New York, NY, USA: Association for Computing Machinery, Jul. 2024, pp. 1–9. doi: 10.1145/3641519.3657414." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Unlocking Novelty through Multiobjective Generative Models: A Study on Diverse In-Betweening Human Motions within Batch Diversity" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024novelty,\ntitle={Novelty Unlocking with Multiobjective Generative Models: Batch Diversity of Human Motions},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xNwmWaq2KN},\nnote={under review}\n}" }, "abstract": { "value": "Current generative models have shown potential performance in many tasks, which typically focus on generating samples that closely adhere to a given distribution, often overlooking the requirement to produce optimal diverse solutions in a batch diversity.\nRecognizing that maintaining ``diversity\" has been a longstanding challenge in multiobjective optimization, we were inspired to introduce a multiobjective optimization approach to enhance diversity in a single pass.\nThis paper utilizes the in-betweening human motion generation task as an example and introduces the multiobjective generative models to demonstrate the effectiveness of the proposed method in producing diverse and smooth human motion sequences. The resulting method, termed the \\textit{Multiobjective Generation Framework with In-Betweening Motion Model} (MGF-IMM), frames the human motion in-betweening task as a bi-objective optimization problem. The designed in-betweening motion model is then integrated into a nondominated sorting-based optimization framework to address this bi-objective optimization problem.\nThrough comprehensive qualitative and quantitative experiments, MGF-IMM has demonstrated state-of-the-art performance, surpassing the latest methods and validating its superiority in generating diverse in-betweening human motions." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Multiobjective optimization;Diverse In-Betweening Human Motions" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/ad34ddd0850e5372f6f801e6d0c25b7e66fc4681.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Novelty Unlocking with Multiobjective Generative Models: Batch Diversity of Human Motions" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xOZYU67EKL
MMD-NSL: Mixed Multinomial Distribution-based Neuro-Symbolic Learning
main
Active
Neuro-Symbolic Learning;Multinomial Mixture Distribution
neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
3;3;5;5;6
4;4;3;2;4
2;3;2;3;3
1;2;2;3;3
1;2;1;2;3
4.4
3.4
2.6
2.2
1.8
-0.375
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See the section on weakness. We will increase the score based on the answer to the question." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The proposed MMD-NSL framework is novel in its combination of mixed multinomial distributions and bijective mapping between MLNs and continuous multinomial distributions. This dual approach uniquely extends MLNs to incorporate context-aware embeddings, bridging symbolic logic and continuous representations, which is both creative and forward-thinking.\n\nThe paper’s bilevel optimization strategy is technically sound and thoughtfully designed, enabling a transformer-based upper level to dynamically learn context-sensitive mixing coefficients while optimizing rule weights in the lower level." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a framework for neuro-symbolic learning called Mixed Multinomial Distribution-based NSL (MMD-NSL). The primary aim of the framework is to address two core challenges in NSL: managing long dependency chains and handling complex semantic categorization within knowledge graphs. This work propose a mixed multinomial logic semantic distribution to integrate both context-aware semantics and long-range dependencies, building upon traditional Markov Logic Networks.\n\nThe framework leverages a bilevel optimization strategy: the upper level, powered by transformer-based architectures, dynamically learns mixing coefficients analogous to attention mechanisms, while the lower level optimizes rule weights to capture context-specific dependencies." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.Lack of Comparisons with More Diverse Baselines: The paper does not benchmark against a sufficiently broad spectrum of established baselines, which is essential for assessing relative performance comprehensively.\n\n2.Distinction between Neuro-symbolic Learning and Causal and Temporal Reasoning: The differences between neuro-symbolic learning you've pointed out and causal reasoning, temporal reasoning remain unclear. Clarifying how neuro-symbolic learning diverges from or intersects with causal and temporal reasoning approaches could enhance understanding of its unique capabilities and limitations.\n\n3.Lack of Detailed Description for Training and Optimization Settings: The paper does not sufficiently detail the settings for training and optimization, which are crucial for replicating the results and understanding the model's performance under specific conditions." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The author uses a bilevel optimization algorithm, can further elaborate its efficiency?\n2. The algorithm for solving the bilevel optimization does not seem to obtain the optimal solution. Can its convergence be proved?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The introduction of the Mixed Multinomial Distribution in the field of NSL for addressing complex classification issues exhibits novelty. \n2. Both theoretical and empirical analyses are comprehensive, with theoretical findings establishing MMD-NSL as a more general form of classical NSL, thereby making a contribution to the NSL field." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents the Mixed Multinomial Distribution-based NSL (MMD-NSL) framework, which adeptly addresses the challenge of complex categorization by seamlessly integrating the handling of long dependency chains with complex semantic categorization within KGs. Extensive experimental results demonstrate the effectiveness of the proposed method'." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "In general, the experiments of the paper are relatively weak. Firstly, there is a limited comparison of the proposed algorithm with other notable approaches such as DeepProblog [1] and Semantic Loss [2]. Secondly, despite the authors conducting experiments on numerous datasets, these datasets appear to be predominantly toy and simplistic.\n\nReferences:\n[1] DeepProbLog: Neural Probabilistic Logic Programming\n[2] A Semantic Loss Function for Deep Learning with Symbolic Knowledge" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Refer to the weaknesses section for the unclear points." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tThe bijective mapping between MLNs and continuous multinomial distributions is a valuable contribution.\n2.\tIncorporating ontology information into rule learning is a sensible approach." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper develops an embedding-based, optimizable probabilistic framework that can be bijectively mapped to the MLN framework. Within this framework, the rule discovery and reasoning tasks are extended by incorporating the ontology information of entities in the rule head. A bilevel optimization strategy is employed to optimize the proposed model. Experimental results demonstrate the superiority of the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tOntology information is not fully utilized in the framework; it should also be applied to the tail part of the rule. This enhancement would make the paper more comprehensive and increase its contribution.\n2.\tThe experimental design lacks a key analysis: how the two components—the ontology information and the MLN-based probabilistic framework—individually contribute to the observed improvements.\n3.\tCertain aspects of the algorithm are unclear. For instance: 1) How are the rules generated? 2) How is n_j calculated for each newly generated rule?\n\nThe paper addresses some important points in this field but does not yet meet the standards for the conference." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Please see the weaknesses section" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The experiments show consistent improvements over state-of-the-art NeSy systems on a real-world dataset." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work presents Neuro-Symbolic Learning based on Mixed Multinomial Distributions (MMD-NSL) with the aim of improving performance in handling complex relationships within Knowledge Graphs (KG). To this end, theoretical relations between the presented method and state-of-the-art methods are presented. The performance of MMD-NSL is evaluated on a synthetic and a real-world data benchmark." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The presentation overall lacks clarity and needs further refinement. 
This includes many grammar errors and issues such as exact repetitions within the same page (e.g., \"establishing a bijective mapping between MLNs and continuous multinomial distributions,\" both on lines 66 and 76).\n\n2. The Figures do not support the significance of the contribution well. Figure 1 takes up half a page to demonstrate that a simple synthetic target could be learned. Figure 2 takes up an entire page with a questionable amount of information for the reader. Also, in Figure 2, labels are badly formatted (overlapping numbers) with misleading and inhomogeneous color mapping (e.g., a difference of 0.01 changing the hue completely). Again, some phrases are nearly identical, e.g., between Figure 1 caption and lines 442-446.\n\n3. Given the nature of this contribution, namely a novel NeSy learning architecture, it would be highly desirable to compare performance across multiple datasets instead of a single one.\n\n4. The mathematical presentation is, at times, lacking. For example, in the introduction and overview sections, logical rules are presented as a head implying the body (r_head -> r_1, r_2, ... ,r_n). Traditionally, the opposite is the case, with r_head <- r_1, r_2, ... ,r_n being the way Horn Clauses are written in implication form." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1) Can you make a clear example (and maybe it could be useful to add it to the main paper) about long dependency chains and complex \ncategorization that you mention all across the paper?\n2) \"We denote zj | Ck as the j-th rule variable conditioned on the k-th context variable, which specifically corresponds to the structured relation ⟨C(h), rhead,C(t)⟩ → r1 ∧. . .∧rL.\" Why C is not C_k and r_head r1, ..., rL do not depend on j?\n3) \"where zj = 1 if the rule zj holds true in G and −1 otherwise\" what does it mean that the rule zj holds true in here? It is not explained if each rule is universally quantified or it is ground, wrt which variables or objects, what are the known facts, if it is assumed CWA. \n4) What do denote f_j and z_j in equation 1? According to the beginning of section 3.2 z_j denotes a latent variable for a rule. How this conciliates with MLN where you have rules and each f_j(z_j) is just n_j number of groundings satisfying the rule. Or similarly in PSL with fuzzy relaxations or in NLMN with neural potentials?\n5) \"Following Definition 1, there are two types of Logic Semantic Variable:\" Why only these two types?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The aim and the idea behind the paper seem interesting and the experimental results are good. 
However, I found the paper too confusing, and it is very difficult to assess its concrete contribution in its current form (see weaknesses and questions)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces the MMD-NSL framework, which extends classic MLNs by incorporating context-aware semantic embeddings, in order to both handle long dependency chains and address the complexity of semantic categorization within KGs. This is achieved by introducing a continuous Mixed Multinomial Logic Semantic Distribution. The framework employs a bi-level optimization, exploiting a transformer-based attention mechanism at the upper level while also learning rule weights at the lower level. MMD-NSL shows significant improvements over existing state-of-the-art approaches across several datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The related work discussion is quite limited for a topic that has been extensively studied by different communities. For instance, modelling logic rules as latent variables has already been considered, e.g., in [1] or [2], as has extending MLNs with neural components [3]. Similarly, concerning logic rules in the context of KGs, state-of-the-art approaches like RLogic [4] or NCRL [5] should be discussed, as well as LERP [6], which models contextual information from neighboring sub-graphs of entities in logic rules.\n- Too many symbols and statements appear out of nowhere, without having been introduced or explained. See both the other comments and the questions for more details.\n\nOTHER COMMENTS\n- \"Typically, NSL tasks consider long dependencies expressed as simple logical implications like r_head → r_1 ∧ ... ∧ r_L\" Generally it is the opposite: several NSL systems rely on Horn clauses of the form r_1 ∧ ... ∧ r_L → r_head. Does the implication here have a different meaning, or what do the authors mean?\n- \"We generalize this to tasks formulated as ⟨C(h), r_head, C(t)⟩ → r1 ∧ ... ∧ rL, where C(·) denotes the NER types of the node\": what are \"h\" and \"t\"? What is the NER type of a node? These symbols and notions have not been defined before.\n- \"the first-order logic rule bodies r1 ∧ ... ∧ rL are\": generally the body is a conjunction of literals (e.g., atoms in FOL), hence here I see only ONE body with L relations. Is there a typo, or what exactly do the authors mean by body? These terms, as well as what the r_i mean, have never been explained.\n- Definition 1 is unclear. So you call a Logic Semantic Variable any continuous image by a \"relaxation function\" F to a variable zj based on the embeddings {e_NERhead, e_NERtail, e_path} of G? What are these embeddings? How are they obtained? What is meant by \"based on\"? This statement cannot be a formal definition without formally declaring the objects it refers to.\n- \"an semantic\" typo\n- \"LTN represents a fuzzy logic-based extension of MLNs\" LTN is not a fuzzy extension of MLNs. In LTN the weights of the rules are fixed and cannot be learnt. A fuzzy extension of MLNs is given, e.g., by PSL [7].\n\nReferences\n[1] Maene, Jaron, and Luc De Raedt. \"Soft-unification in deep probabilistic logic.\" Advances in Neural Information Processing Systems 36 (2024).\n[2] Marra, Giuseppe, Michelangelo Diligenti, and Francesco Giannini. \"Relational reasoning networks.\" arXiv preprint arXiv:2106.00393 (2021).\n[3] Marra, Giuseppe, et al. \"Relational neural machines.\" ECAI 2020. IOS Press, 2020.
1340-1347.\n[4] Cheng, Kewei, et al. \"RLogic: Recursive Logical Rule Learning from Knowledge Graphs.\" Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2022.\n[5] Cheng, Kewei, Nesreen K. Ahmed, and Yizhou Sun. \"Neural Compositional Rule Learning for Knowledge Graph Reasoning.\" International Conference on Learning Representations (ICLR). 2023.\n[6] Han, Chi, et al. \"Logical Entity Representation in Knowledge-Graphs for Differentiable Rule Learning.\" The Eleventh International Conference on Learning Representations. 2023.\n[7] Bach, Stephen H., et al. \"Hinge-loss Markov random fields and probabilistic soft logic.\" Journal of Machine Learning Research 18.109 (2017): 1-67." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024mmdnsl,\ntitle={{MMD}-{NSL}: Mixed Multinomial Distribution-based Neuro-Symbolic Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xOZYU67EKL},\nnote={under review}\n}" }, "abstract": { "value": "Neuro-symbolic learning (NSL) aims to integrate neural networks with symbolic reasoning approaches to enhance the interpretability of machine learning models. Existing methods mostly focus on the long dependency problem of symbolic learning. The important challenge of complex categorization is largely overlooked. To bridge this gap, we propose the Mixed Multinomial Distribution-based NSL (MMD-NSL) framework. It seamlessly integrates the handling of long dependency chains and complex semantic categorization within Knowledge Graphs (KGs). By introducing a continuous Mixed Multinomial Logic Semantic Distribution, we extend traditional Markov Logic Networks (MLN) to incorporate context-aware semantic embeddings. Our theoretical innovations, including a bijective mapping between MLNs and continuous multinomial distributions, enable the capture of intricate dependencies and varied contexts crucial for NSL tasks.\nThe framework leverages a bilevel optimization strategy, where a transformer-based upper level dynamically learns mixing coefficients akin to attention mechanisms, while the lower level optimizes rule weights for learning both context and rule patterns. Extensive experiments on the DWIE benchmarking datasets demonstrate significant advantages of MMD-NSL over four state-of-the-art approaches. It achieves 10.47% higher F1-scores on average than the best-performing baseline across 23 sub-datasets. It advances continuous probabilistic models for neuro-symbolic reasoning and complex relational tasks." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Neuro-Symbolic Learning", "Multinomial Mixture Distribution" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review."
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/19130829ed160366dff1faa214c7d0128724ecbb.pdf" }, "presentation": null, "primary_area": { "value": "neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "MMD-NSL: Mixed Multinomial Distribution-based Neuro-Symbolic Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xOmC5LiVuN
Learning General-purpose Biomedical Volume Representations using Randomized Synthesis
main
Active
synthetic data;representation learning;medical image analysis;image registration
applications to computer vision, audio, language, and other modalities
5;6;6;8
4;4;4;4
3;3;3;3
2;2;3;3
3;3;3;4
6.25
4
3
2.5
3.25
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see weaknesses section." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "-\tInnovative approach with a highly powerful general synthetic data engine, I think this work adds a lot to the discussion on “foundational” models in medicine. \n\n-\tThe method requires a comparatively small number of trainable hyperparameters compared to existing models while achieving a higher accuracy. This in my opinion is an important contribution towards sustainable models. \n\n-\tThe authors provide the source code which is a major plus for reproducibility and discuss reproducibility aspects. \n\n-\tThe authors evaluate the effect of different pretraining strategies in reasonable ablation studies. \n\n-\tThe authors honestly show negative results in the appendix." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The work proposes a new method for general (few-shot) medical image segmentation and registration. The method is based on generating realistic and varying synthetic data and training a generalist 3D segmentation and registration models on this data. The goal is to develop a method for generalizing to different imaging devices, medical procedures and conditions as well as different populations. \n\nThe data synthesis engine is capable to generate highly diverse samples and uses the totalsegmentattor dataset for shape templates. The “foundational” model is then pretrained using contrastive pretraining and finetuned in multi-modality registration and few-shot segmentation. \n\nIn experiments the authors demonstrate excellent performance in both downstream tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "-\tConsidered datasets. If I am not missing any details the authors mostly evaluate their method on CT and MRI images. They even consider an MRI image as out of distribution. I think experimentation and ablation on more difficult and diverse datasets, such as 3D microscopy or Ultrasound, even if the results are not all positive would add to the discussion and validate the claim of a “general volumetric biomedical foundation model”\n\n-\tFraming of contribution. I would prefer if the authors tone down their wording about the model and clearly point out that it is a model for radiology or CT and MRI dataset and not a “general volumetric biomedical foundation model”.\n\n-\tHyperparameter selection: What range of hyperparameters was tested, and how much time or resources were spent on tuning? How were the hyperparameters for the four baseline methods chosen? Especially for fine-tuning the baselines on your datasets, which I assume is done? Clearly describing the hyperparameter search is important for reproducibility. 
Please correct me if I missed such details in the main manuscript." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "* Authors compare their randomized shape template-based synthetic data engine to one that uses data with no biomedical priors and one using brain regions. Can the Authors elaborate more on the intuition for why their randomly deformed template shapes are so effective? Is there some point at which the extent of the deformation causes the representations to be less useful?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The paper is very well written - it lays out the prior work and puts the contribution in context.\n* The approach yields representations that are both modality-agnostic and task-agnostic while removing the need for dataset-specific and anatomy-specific pre-training.\n* Authors present results of several downstream tasks using their features, including multi-modality image registration and few-shot segmentation, on which their method outperforms the other methods compared.\n* Authors perform ablation studies on the various components of their pipeline.\n* The Authors present extensive visualization and quantitative results in their main text and supplementary material. Algorithms and parameters are clearly presented too, which allows for further scrutiny and improved reproducibility.\n* Authors are aware of the limitations of their approach and include these in the paper." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Authors present a method to generate highly variable synthetic imaging data which is then used to pre-train a 3D network using contrastive learning. The data generation method consists of drawing samples from a semantically labelled repository of biomedical shape templates to randomly populate an empty 3D volume. The volume is then deformed. Empty space and organ 'envelopes' are also simulated. To simulate different modalities and protocols, random intensity transformation is applied to the deformed 3D volume to yield 2 images. Typical imaging artifacts such as bias field and blurring are simulated through random augmentations. The two images are fed into a UNet, and contrastive pre-training is performed on features at each decoder layer. An anchor point is chosen in one of the images, and all voxels of that label in both images are considered positives, and everything else negatives. The network yields features that can be used to finetune on other modalities and tasks. Importantly, the representations are modality-agnostic and anatomy-agnostic.
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The segmentation task performed using the Authors features may yield better results than the other methods that are compared, however the result still misses significant portions of the anatomical regions they aim to segment. The features require further adjustment and extensive fine-tuning to be useful in diagnosis and treatment." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The proposed workflow involves many hyper-parameters (Figure 12) controlling the properties of generated synthetic volumes -- what is the rule of thumb for choosing them?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The overall methodology is straightforward and easy to understand. It echoes with the classical computer vision concept of invariance learning in the deep neural network era (although learned through a data-driven approach).\n\nImproved image registration results on two public image registration benchmarks and image segmentation performance on six image segmentation datasets are shown, compared to those of some existing works. \n\nThe paper is well-written with sufficient clarity. The illustrations are self-explanatory. Readers will enjoy reading it." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work proposes a pre-training approach for downstream tasks related to fine-grained volumetric medical data: image registration and semantic segmentation. The authors propose to learn appearance invariance and shapes of human anatomy through synthetic dense pixel volumes. In this process, 3D volumes are synthesized by randomly recomposing 3D anatomical shapes and assigning multiple sets of random pixel values, in together with synthetic noises and deformations. Pairs of synthetic volumes are used for multi-scale contrastive learning. The proposed approach demonstrates improved image registration and low-shot image segmentation results compared to some previous works. Detailed ablation studies on the pre-training configurations toward downstream performances are conducted." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Technical novelty: The core idea behind the approach is to leverage data-driven approach to learn invariance to pixel values through paired synthetic shapes and different pixel values, and to learn semantic-independent shape representation through random geometric (real-world and pure synthetic) shapes – both key ideas come from the well-established SynthMorph (random Ising shapes with random intensity + synthetic deformation for training registration networks. Hoffmann et al.) 
and SynthSeg (GMM-like pixel value model for repainting brain structures; Billot et al.). Despite leveraging more anatomical shapes beyond brain structures and applying them in a contrastive framework, the essence remains unchanged. \n\nMany medical decisions are made not only on shape but also on subtle textures, for example, differentiating subtypes of tumors/lesions – toward which the proposed over-simplified appearance model by nature falls short. More sophisticated texture models need to be carefully studied beyond this manuscript.\n\nFor the same reason, high-level global semantic information such as relative locations between anatomical structures cannot be learned, due to the nature of this approach. \n\nReal-world value: Given the increasing number of large-scale publicly accessible volumetric image datasets such as CT-RATE (Hamamci et al.), TotalSegmentator (Wasserthal et al.), and AbdomenAtlas (Li et al.), and the derived 3D foundation models, the real-world application of the proposed framework is unclear. Some of these large-scale public datasets come with fine-grained pixel-wise labels and associated radiological reports, which provide additional supervision signals and text alignment potential. The claimed generalization capability can be learned from large multi-site real-world datasets as well, through the intrinsic heterogeneity of big data and possibly through intense data augmentation." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1- What are the exact steps of the \"label ensemble model\" described in Section 3? Please write an elaborate description of these steps.\n2- Why do the generated images not look like real medical images? Is it because the deformation is too large? Why are such \"unrealistic\"-looking images preferred over more realistic ones obtained with smaller deformation?\n3- How does the quality of the representations obtained by the proposed backbone compare with SoTA foundational models such as DinoV2 or SAM2?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Foundational models are showing promising performance lately; however, we lack a 3D model that works across different modalities in medical imaging. The paper proposes a solution to this important problem using domain randomisation and contrastive learning.\n2. The paper contains experiments on multiple datasets, both for registration and few-shot segmentation, and the results demonstrate the potential of the method.\n3. The idea of combining domain randomisation and local contrastive learning to train a generic 3D backbone is quite interesting and, to my knowledge, is novel.
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes training a backbone that generalizes across different datasets using synthetically generated dataset. The proposed pre-training strategy has 3 main steps: 1) Given a large datasets of 104 annotated organs, randomly sample anatomies, deform them and create a volume by ensembling these anatomies, 2) add noise and other augmentations to the volumes that are sampled in the previous step to simulate realistic looking synthetic medical images from labels, 3) train a U-Net with a contrastive objective by sampling two volumes that share the same 3D semantic layout but differ in appearance, treating corresponding features at different encoder levels as positives and all others as negatives. The pre-trained backbone is validated on two different tasks: 3D registration and 3D few-shot segmentation; using multiple datasets. The results show the effectiveness of the proposed backbone in the experiments compared to existing methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- One issue I see in the paper is the convoluted description of the data engine step, especially the part creating the label ensemble model in section 3. I understand that this step is mainly based on the domain randomisation idea proposed in the literature. However, it is not really clear to me the steps between 201-207, especially the parts multiplying the masks with a randomly deformed sphere and randomly encasing half of the foreground-masked volumes.\n\n- The images generated in the data engine step do not seem like as real medical images. Do they look like this because the deformation is too large? It is not clear why one would prefer training the model using such unrealistic images.\n\n- The paper does not discuss the recent foundational models that show better generalization performance on many medical image datasets [1]. The downstream task performance of the representations obtained from the proposed backbone should be compared with the those obtained by the representations of a foundational model (e.g. DinoV2 [2]). For example, [3] is a recent paper that uses DinoV2 features for registration; but the same applies for the segmentation experiments. One can use the DinoV2 features for segmentation.\n\n[1] Cekmeceli et al. \"Do Vision Foundation Models Enhance Domain Generalization in Medical Image Segmentation?\"\n[2] Oquab et al. \"DINOv2: Learning Robust Visual Features without Supervision\"\n[3] Song et al. \"DINO-Reg: General Purpose Image Encoder for Training-Free Multi-modal Deformable Medical Image Registration\"" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We develop a synthetic data-driven contrastive learning framework to train a general-purpose 3D biomedical vision model that achieves state-of-the-art results in 3D multimodal image registration and few-shot segmentation." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024learning,\ntitle={Learning General-purpose Biomedical Volume Representations using Randomized Synthesis},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xOmC5LiVuN},\nnote={under review}\n}" }, "abstract": { "value": "Current _volumetric_ biomedical foundation models struggle to generalize as public 3D datasets are small and do not cover the broad diversity of medical procedures, conditions, anatomical regions, and imaging protocols. We address this by creating a representation learning method that instead anticipates strong domain shifts at training time itself. We first propose a data engine that synthesizes highly variable training samples that enable generalization to new biomedical contexts. To then train a single 3D network for any voxel-level task, we develop a contrastive learning method that pretrains the network to be stable against nuisance imaging variation simulated by the data engine, a key inductive bias for generalization. This network's features can be used as robust representations of input images for downstream tasks and its weights provide a strong, dataset-agnostic initialization for finetuning on new datasets. As a result, we set new standards across _both_ multimodality registration and few-shot segmentation, a first for any 3D biomedical vision model, all without (pre-)training on any existing dataset of real images. Our code is attached." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "synthetic data", "representation learning", "medical image analysis", "image registration" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/7525613f0417fd0f2ebc030a3070ee7395382909.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/291cfa9fbde391d0e847e42370abbfba319c8725.zip" }, "title": { "value": "Learning General-purpose Biomedical Volume Representations using Randomized Synthesis" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xOtOfdbBqK
A Drop-In Solution for On-the-Fly Adaptation of Speculative Decoding in Large Language Models
main
Active
LLM optimizations
foundation or frontier models, including LLMs
3;5;6;6
4;3;3;2
3;3;3;3
1;2;3;2
2;3;3;3
5
3
3
2
2.75
-0.866025
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "How scalable are the adaptive methods presented, Further clarification on scalability could help evaluate the practicality of these techniques in diverse deployment scenarios." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Practical Contribution: The paper presents a solution for a well-acknowledged challenge in LLM inference—latency. By offering a drop-in solution for dynamic adaptation, this work has significant practical value, especially for real-time applications in large-scale deployment scenarios.\n\n2. The adaptive approach bypasses the need for extensive offline training and tuning, unlike some previous methods. This makes it easier to adopt in diverse settings where the underlying hardware, model, and task configurations may vary frequently.\n\n3. The paper effectively articulates the limitations of static speculative decoding parameters and demonstrates the need for an adaptive, flexible approach to speculative decoding." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces an \"on-the-fly\" adaptation technique for speculative decoding in large language models (LLMs), aiming to reduce inference latency dynamically without requiring prior model-specific training or offline benchmarking. The authors propose a framework that selects optimal parameters for speculative decoding during runtime, specifically the draft model and speculation window size γ. By leveraging adaptive techniques such as online optimization, finite state machines, cache-enhanced FSMs, and reinforcement learning, the approach achieves up to 3.4× speed improvements over default autoregressive decoding and outperforms standard speculative decoding methods by 3.55% to 16.48%." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Scope for Enhanced Comparative Analysis: The paper covers multiple adaptive techniques to optimize speculative decoding, but a broader comparative analysis could provide more clarity. Highlighting conditions where each method performs best would further aid in understanding the practical strengths and limitations of each approach, helping readers assess their applicability across diverse scenarios." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. What is the value of window size $\\gamma$ for the standard speculative decoding baseline? Is the window size $\\gamma$ for the standard speculative decoding baseline in each dataset set as the optimal value obtained by searching through all possible fixed $\\gamma$ values?\n\n2. `BLOOM` models achieve \\~70% throughput improvement in `xsum` dataset, while having no speed-up compared with default LLMs without speculative decoding. Why is this throughput improvement so high compared with others (\\~10%)?\n\n3. Is it possible to provide results on a comprehensive chat dataset?\n\nMinor:\n\n4. How is the vector embedding of a prompt calculated (like $u$ in line 321 and $b$ in line 347)?\n\n5. What is `online predictive model construction` in line 382?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The idea of adjusting window size and draft model dynamically in the decoding process is novel and can be helpful in maximizing the decoding speed, especially when the speculation accuracy varies in the generation process. \n\n2. The cost of the estimation method is rather small, making it easy to integrate with existing speculative decoding methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces solutions to optimize speculative decoding configurations on the fly. For window size, the paper proposes online window size optimization based on speculative accuracy estimation, FSM, or RL. For the choice of the draft model, the paper proposes the estimation of speculative accuracy estimation from various factors. Therefore, the configurations can be dynamically adjusted between speculative decoding steps based on history.\n\nExperimental results over various devices and benchmarks demonstrate speed improvements compared with the standard speculative decoding and a speed-up compared with default decoding procedures." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The improvement is somehow marginal. The window size selection method generally achieves improvements of less than 10% and less than 1% in some cases. The proposed draft model choice method leads to less than 1% improvement in many cases.\n\n2. The method is only tested in domain-specific tasks (math/code/finance QA/summarization) and is not evaluated on comprehensive chat datasets like `ShareGPT`, which raises questions about its applicability to realistic chat scenarios." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. 
Baselines seem especially weak and some other baselines are not compared. For example, Medusa and EAGLE are currently popular approaches that have been compared in numerous papers; all of these methods utilize speculative decoding techniques and demonstrate strong performance.\n2. Although I find the concept of a dynamic speculation window size intriguing, I am skeptical about its practical application value. Recent studies have already adopted tree attention or tree decoding-related technologies; they do not require a speculation window size and have achieved significant speedup. Could you discuss how your approach compares with, or could potentially be integrated into, tree attention or tree decoding technologies? \n3. In my understanding, Table 2 presents the results using adaptive window size selection, and Table 3 presents the results using draft model selection. Why not conduct an experiment to show the results of using both techniques simultaneously? Additionally, could you specify which draft model was used for the 'w/o draft selection' condition in Table 3?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper is well-written; the author has explained their motivation and methods in detail.\n2. The article analyzes the factors influencing the acceleration of large models through speculative decoding, using rigorous formula derivation, and demonstrates the importance of employing dynamic sampling windows and dynamically selecting the draft model.\n3. The article conducted ample experiments to demonstrate the effectiveness of its method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This article primarily focuses on accelerating inference for LLMs. It represents an improvement over existing speculative decoding methods by dynamically adjusting the speculation window length and selecting different draft models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Baselines seem especially weak and some other baselines are not compared. For example, Medusa and EAGLE are currently popular approaches that have been compared in numerous papers; all of these methods utilize speculative decoding techniques and demonstrate strong performance.\n2. Although I find the concept of a dynamic speculation window size intriguing, I am skeptical about its practical application value. Recent studies have already adopted tree attention or tree decoding-related technologies; they do not require a speculation window size and have achieved significant speedup. Could you discuss how your approach compares with, or could potentially be integrated into, tree attention or tree decoding technologies? \n3. In my understanding, Table 2 presents the results using adaptive window size selection, and Table 3 presents the results using draft model selection. Why not conduct an experiment to show the results of using both techniques simultaneously? Additionally, could you specify which draft model was used for the 'w/o draft selection' condition in Table 3?"
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "My main question is why focusing only on single-sequence speculation rather than the tree-based speculation structure, which is the current best practice for speculative decoding. More specifically, how does your approach compare with **EAGLE-2**, which leverages a similar idea (adaptive speculation) but on draft trees rather than a single speculation sequence, which results in a much higher speed-up rate? The simplicity of your solution can be a bonus if it can be generalized to tree-based speculation and achieve a higher speed-up than EAGLE-2. \n\nSome minor suggestions:\n\n1. Try formulating the verification latency as a function of the speculation structure from a more systematic perspective. For example, make it a function of the speculation sequence length or speculation tree structure, this may also depend on the underlying hardwares' specifications.\n2. Include some ablation studies to show how different configurations affect the effectiveness of the speculation accuracy estimator. For example, how different $h$ (recent history length) values can affect your speculation accuracy prediction for online window size optimization, and how different linear model (different vector lengths or random variables) can affect your speculation accuracy prediction for estimating the speculation accuracy for different draft models." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The problem of adaptive speculation length is well-motivated as the overhead of speculation is non-negligible. As the paper indicated, an over-optimistic speculation length would have a high overhead if there is an early mismatch, and an over-conservative speculation length would result in fewer speed-up potentials. \n2. Though the objective formulation and history-based parameter estimation are not new and similar as in [1], the problem focus is slightly different ([1] is not focusing on the general LLM speculative decoding), so the formulation can be of interest to the general LLM speculative decoding audience. \n3. Using a linear system to model the speculation accuracy across different draft models is new.\n\n**reference**\n\n[1]. Zhang, Z., Zhu, A., Yang, L., Xu, Y., Li, L., Phothilimthana, P. M., & Jia, Z. Accelerating Iterative Retrieval-augmented Language Model Serving with Speculation. In Forty-first International Conference on Machine Learning." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduced an online optimal speculation length scheduling approach for efficient speculative decoding. It first proposes an objective, as in Eq. 1, to formally capture the speculation accuracy and latency tradeoff. 
Given the objective, an accurate estimation of speculative/verification latency and speculation accuracy is required. For a single draft model, the authors propose to use a history window to estimate the latency through online profiling and the speculation accuracy through MLE. For selection among multiple draft models, the authors further propose a parametric method with a linear model to predict the speculation accuracy for different draft models. Empirically, the paper shows that its approach can consistently outperform SpecDec++ over various target models and datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper's approach only works for speculative decoding with a single sequence, while the current state-of-the-art practice is to do speculation with a **tree-based** structure. More specifically, [1] can achieve up to 3.01x speed-up for LLaMA-2-70B, and 3.51x speed-up if dynamically adjusting the draft tree structure is further incorporated as in [2], both of which outperform the speed-ups claimed in the paper. \n2. In practice, the verification latency is usually not constant but depends on the speculation structure to be verified. It would therefore be more accurate to characterize the verification latency under various speculation structures and formulate it as a function of the speculation structure, e.g., sequence length for a single-sequence speculation structure or width and depth for a tree-based speculation structure. \n3. As the prediction accuracy of the linear system used to estimate draft model speculation accuracy is pretty critical, a more comprehensive ablation study on the choices of different linear models (e.g., different random variables or different vector lengths) is crucial for understanding how you chose the optimal linear system among different possible configurations.\n\nThe second and third points are minor issues, while the first point is a major issue from my point of view.\n\n**references**\n\n[1]. Li, Y., Wei, F., Zhang, C., & Zhang, H. EAGLE: Speculative Sampling Requires Rethinking Feature Uncertainty. In Forty-first International Conference on Machine Learning.\n\n[2]. Li, Y., Wei, F., Zhang, C., & Zhang, H. (2024). Eagle-2: Faster inference of language models with dynamic draft trees. arXiv preprint arXiv:2406.16858." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024a,\ntitle={A Drop-In Solution for On-the-Fly Adaptation of Speculative Decoding in Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xOtOfdbBqK},\nnote={under review}\n}" }, "abstract": { "value": "Large Language Models (LLMs) are cutting-edge generative AI models built on the transformer architecture, which tend to be highly memory-intensive when performing real-time inference. Various strategies have been developed to enhance the end-to-end inference speed for LLMs, one of which is speculative decoding. This technique involves running a smaller LLM (draft model) for inference over a defined window size, denoted as $\\gamma$, while simultaneously being validated by the larger LLM (target model). Choosing the optimal $\\gamma$ value and the draft model is essential for unlocking the potential of speculative decoding.
But this is difficult to do, due to the complicated influence of various factors, including the nature of the task, the hardware in use, and the combination of the large and small models. \nThis paper introduces *on-the-fly adaptation of speculative decoding*, a solution that dynamically adapts the choices to maximize the efficiency of speculative decoding for LLM inference. As a drop-in solution, it needs no offline benchmarking or training. \nExperiments show that the solution can lead to 3.55-16.48\\% speed improvement over standard speculative decoding, and 1.2-3.4$\\times$ over the default LLMs." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "LLM optimizations" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/6aceabb3526ca377b46d6f49a845ad70a4f4284b.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/1111ee6969dde57dc7fe4df692dd6a81686f01a4.zip" }, "title": { "value": "A Drop-In Solution for On-the-Fly Adaptation of Speculative Decoding in Large Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
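Note for readers: the reviews of this paper repeatedly discuss the tradeoff behind choosing the speculation window size $\gamma$. As background, the expected-speedup model from the original speculative decoding analysis (Leviathan et al., 2023) can be written down in a few lines. The sketch below uses that standard model with an assumed per-token acceptance rate `alpha` and draft/target cost ratio `c`; it is not the paper's Eq. 1 or its online estimator, and all constants are illustrative.

```python
def expected_accepted(alpha: float, gamma: int) -> float:
    """Expected tokens produced per draft-verify cycle when each drafted
    token is accepted i.i.d. with probability alpha (Leviathan et al.,
    2023): E = (1 - alpha**(gamma + 1)) / (1 - alpha)."""
    return (1 - alpha ** (gamma + 1)) / (1 - alpha)

def best_gamma(alpha: float, c: float, max_gamma: int = 16) -> int:
    """Pick the fixed gamma maximizing tokens per unit time, where one cycle
    costs gamma * c (draft steps, each a fraction c of a target forward
    pass) plus 1 (the single parallel verification pass)."""
    def throughput(g: int) -> float:
        return expected_accepted(alpha, g) / (g * c + 1)
    return max(range(1, max_gamma + 1), key=throughput)

# Example: an accurate draft model (alpha=0.8) that is 20x cheaper than the
# target (c=0.05) favors a longer speculation window than a weak one.
print(best_gamma(0.8, 0.05), best_gamma(0.5, 0.05))
```

This static optimum depends on quantities (alpha, c) that vary with task and hardware, which is exactly why the reviews debate estimating them online rather than fixing $\gamma$ in advance.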
xP1radUi32
Endless Jailbreaks with Bijection Learning
main
Active
jailbreaking;redteaming;AI safety;AI alignment;adversarial robustness;adversarial attacks
alignment, fairness, safety, privacy, and societal considerations
3;5;5;6
4;4;3;4
2;3;3;3
2;3;2;2
3;1;3;3
4.75
3.75
2.75
2.25
2.5
-0.132453
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. \"We search for universal attack mapping candidates from our earlier evaluations by selecting attacks which produce a strong response for a particularly malicious intent. For a selected mapping, we generate the fixed bijection learning prompt and evaluate it as a universal attack on HarmBench\" (quote starting from line 342). I was confused by this phrasing, is the scaling plot in Figure 4 for best-of-n at n=1 done for a single random sample of a language, or is it done for the single best language subselected from all the prior experiments?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The algorithm is clear, simple, and admits random sampling for endless bijections. \n2. The analysis is comprehensive. I really appreciated the scaling analyses for 1) the n in best-of-n and 2) the ASR vs model capabilities frontier showing that more capable models may be more susceptible." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper discusses an approach to jailbreaking via bijection learning. Specifically, they generate a random transformation over characters that shifts the input prompt out of the safety fine-tuning distribution. They find that models prompted to answer inputs in bijection format are more likely to output harmful content compared to standard model inference." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The main ideas presented in this paper have been identified in prior works under mismatched generalization [1] and lack of robustness to out-of-fine-tuning distribution prompts such as low-resource languages [2,3]. [1] also makes the observation that transformation-based jailbreaks benefit from increasing model scale. As such, this paper extends these ideas rather than introduces them. \n2. I believe the comparison in Figure 3 to the baselines is not apples-to-apples since bijection learning performs a best-of-6 while the baselines are effectively best-of-1. I think a more fair comparison is either doing best-of-1, or having a combined baseline that's an ensemble of six reasonable baselines.\n\n[1] https://arxiv.org/abs/2307.02483\n[2] https://arxiv.org/abs/2310.02446\n[3] https://arxiv.org/abs/2309.10105" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the weakness part above. Could the authors explain more on how their proposed methods differ from the existing ones?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. ASR of the proposed method is high on many frontier LLMs.\n2. The authors did comprehensive experiments to verify the effectiveness of their method. The results reported in Section 3.3 is interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a general framework for jailbreaking attacks with bijection encoding ciphers. Experiments show that the attack is effective on a wide range of frontier language models. They also find out that the bijection cipher attack is more effective on larger and stronger language models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The novelty of this work is questionable given many existing cipher based jailbreaking attacks. It seems the only difference between this paper and existing works is that this paper proposes to use a system message to customize general cipher encodings. I'm not confident about whether the contributions are enough for ICLR.\n2. Comparisons between many other cipher-based jailbreaking attacks are missing, including but not limited to:\n[1] When “Competency” in Reasoning Opens the Door to Vulnerability: Jailbreaking LLMs via Novel Complex Ciphers\n[2] Jailbreaking Large Language Models Against Moderation Guardrails via Cipher Characters" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "It would be good to see additional discussion on how LLM developers might defend against this style of attack." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The authors present an interesting attack that demonstrates the surprising ability to scale with LLM power: stronger LLMs appear more susceptible to this attack. Moreover, the proposed attack achieves key desiderata of jailbreaks: against a black-box target, universal/automatable, and scalable. These qualities make \"bijection attacks\" a valuable benchmark for LLM developers to consider when evaluating safety." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new method for jailbreaking LLMs by means of communicating in an encrypted format.\nThe LLM is first given a simple encryption algorithm for encrypting and decrypting strings.\nThen, the user and LLM communicate for several rounds in this encrypted format.\nAfter some time, the user will ask a malicious query (e.g., \"Design a weapon\") in this encrypted format; this paper finds that the LLM response, when decrypted, is often a jailbroken one.\nThis style of attack, dubbed a \"bijection attack\", is more successful against stronger LLMs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "In my opinion, this paper does not have clear weaknesses. However, given the state of jailbreaking research, I do not think that this style of attack paper is scientifically or technically exciting. To change my opinion, I would like to see some deeper technical insights + experiments, possibly with the authors' proposed defense strategies in Section 5 --- but this may be unreasonably ambitious in the rebuttal time frame. While my impression leans on the negative side, I am okay with accepting this work if the other reviewers do not have strong objections." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Most of my questions are in the weaknesses section, but here are some other questions/discussion points.\n\n- What does “endless” mean? How is it different from any “adaptive” attack (like PAIR or random search suffixes) that will craft many potential jailbreaks until it finds one that works? I’m not sure emphasising this makes sense. Maybe you reframe it as an “Adaptive in context learning attack”?\n- In the abstract, what does universal mean? Is it transferred across harmful requests or across models? Do you have a % of how the same bijection template transfers across requests and across models?\n- Multi-turn conversation history - do you think it would be best to use the term few-shot learning here and then, in methodology, talk about how many shots you use (it looks like it is 10-shot?). Also, you call it “teaching examples” later. I would use few-shot since it is clear what it means.\n- What defenses work / don’t work against your attack?\n- Intro: I wouldn’t describe perplexity filtering, paraphrasing, etc as an alignment technique like RLHF. I’d describe it as an adversarial defense against jailbreaks.\n- Maybe add the GPT-4o rewriter to correct typos in Figure 1, so it is clear how it works without needing to read the paper.\n\nThis paper has the potential for a much higher rating, but not in its current form. I would happily raise my score if the claims I mention in the weaknesses section are better explained and the results are significantly simplified in the figures and presented well. 
In addition, I’d like to see the comparison to adaptive black-box attack baselines like PAIR with a comparable attack budget." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- **Simple and scalable attack that exploits in-context learning in LLMs**: The bijection learning attack method allows you to sample a large number of text mappings that transform the input text into a bijection language. This allows you to sample them until you find a successful attack, which is a powerful red-teaming scheme.\n- **High ASR on frontier models**: The attack achieves an ASR of 94% on Claude 3 Opus, which is impressive.\n- **In-depth analysis of how effective the attack is with different scales**: The authors find that the attacks are stronger with scale. Smaller models fail to learn bijections, but the attack can be tuned for difficulty by changing the “fixed size” to work on less capable LLMs.\n- **Contributions to Safety Research**: The paper identifies a new risk factor in AI safety that scales with model capabilities, emphasising the dual-use nature of advanced reasoning in LLMs. It underscores the necessity for evolving safety mechanisms that match these capabilities, providing crucial insights for AI developers on robust mitigation strategies." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces an adversarial attack called \"bijection learning\" that exploits the in-context capabilities of large language models (LLMs) by teaching them a simple cipher to bypass safety guardrails. By teaching LLMs an arbitrary string-string mapping, the attack encodes harmful queries in a \"bijection language\" which can easily be decoded back to English. The approach is effective across various models, with the attack's potency increasing with model capabilities. The authors demonstrate this with extensive experiments, showing high Attack Success Rates (ASR) on a range of safety-trained frontier models such as Claude and GPT-4o. The paper argues that as models become more capable, they become more susceptible to cipher attacks like bijection learning, posing significant safety challenges." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **Claims need to be better explained or backed up with a citation:**\n - “Scale of models and safety is under explored”. I’m not sure this is true because most jailbreak attack papers test over various scales. MSJ, Jailbroken, etc., all look at this (or they at least look at how safety changes with scale). You need to make this claim more specific because I agree that using the improved capability to attack via ciphers is underexplored.\n - “Model capability increasing also widens the attack surface”. I think this is unclear. It does introduce novel vulnerabilities, but it could also be the case that the total attack surface shrinks even when there are these new cipher-based vulnerabilities. So again, I would make this claim more specific and less general (unless you have data to back it up or can cite a paper that does show this generally).\n - Is your method for jailbreaking novel? Ciphers are not new, but perhaps your version of bijection learning in context is novel (see the sketch just below for the primitive I mean).
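For concreteness — a hedged sketch with invented names, not the authors' code — the kind of letter-level bijection under discussion can be generated and applied in a few lines of Python:\n\n```python\nimport random\nimport string\n\ndef make_bijection(c: int) -> dict[str, str]:\n    \"\"\"Letter-to-letter bijection remapping c characters (assume 2 <= c <= 26);\n    the remaining 26 - c letters map to themselves.\"\"\"\n    letters = list(string.ascii_lowercase)\n    moved = random.sample(letters, c)\n    rotated = moved[1:] + moved[:1]  # cyclic shift, so the map stays a bijection\n    mapping = {ch: ch for ch in letters}\n    mapping.update(dict(zip(moved, rotated)))\n    return mapping\n\ndef encode(text: str, mapping: dict[str, str]) -> str:\n    return \"\".join(mapping.get(ch, ch) for ch in text.lower())\n```\n\nUnder the complexity definition I suggest later, c = 0 would leave text unchanged and c = 26 would remap every letter.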
I think it is worth being clearer on what is novel and also doing a better job at comparing and contrasting in the related work section.\n - Is the approach scale agnostic? Perhaps it is, up to a certain point beyond which it would break down? What is the smallest LLM you tried the approach on? You say “new language model risk factors may emerge with scale” but also say the approach is “scale agnostic”. I think making the story clearer and not mixing the two here would be good. Also, it is important not to confuse bigger models with improved alignment fine-tuning, and it can be tricky to make claims about this since labs do not release details of their training.\n - “while many other jailbreak methods only work on smaller models” - can you cite this or explain more? Most jailbreak techniques, e.g., GCG, PAIR, TAP, etc., work well across all model sizes, so I am not sure this claim is true.\n - “more powerful on more capable models” - can you quantify this?\n - “arguable the safest frontier model at time of writing” - can you cite this result or remove it? Gemini 1.5 Pro is very competitive and I’m not sure which is ultimately better.\n - “endless mappings that yield successful jailbreaks” - how many templates did you test, and how many of them worked? It would be good to quantify this in your contributions.\n - “certain complexities of bijection learning systematically produce certain failure models, giving further insight into our scaling laws” - what is the complexity of bijection learning? What are the failure modes? Can you give a one-sentence summary to help ground this claim?\n - “more severe harmful behaviors more frequently” - how much more? Can you give an idea of the jump from 3.5-level models to 4? How do you know that it is bijection learning that induces more harmful behaviour? It could simply be because the model is more capable. Perhaps compare “egregiousness” with other jailbreaks and see if bijection learning induces more harmful ones. Otherwise, this claim isn’t interesting.\n - Say more about your scaling laws in the contribution section - do they allow you to forecast future capabilities? What equation are you fitting?\n - “harmful model responses produced by our attack are thus fully reproducible” - have you tested this on all frontier models? Even at temp 0, output diversity exists, so be careful of your claim here.\n - “Our work is the first to produce attacks that are simultaneously black-box, automated, and scale agnostic” - I don’t think this is the case. PAIR and TAP are prime examples of methods that fit these criteria.\n - In the results section: “[whitebox methods] fail to transfer to the newest” - could you cite this? There is some GCG transfer. If you have numbers, then include them in the paper (even if it is in the Appendix).\n- **Some discussion points can be improved:**\n - I think you can motivate your work better, e.g., many attacks are caught by defenses such as output classifiers, but your work can bypass these easily by using a cipher that the output classifier won’t understand. However, it is unclear whether an input+output classifier would catch your attacks. The classifier must be at the same capability level as the generation model.\n - The discussion of the “bijection jailbreaks can be universal” claim can be improved by making the point that the algorithm itself leads to a universal attack and relies on a “fuzzing” technique that samples a prompt template until one works.
See the LLM fuzzer paper https://www.usenix.org/conference/usenixsecurity24/presentation/yu-jiahao.\n- **Related work lacks comparing and contrasting with their work:**\n - Safety of LLMs - this does not compare and contrast with your work. Is it even relevant?\n - Adversarial attacks against LLMs - please compare and contrast more. Maybe separate the cipher work into its own section and contrast it in finer-grained detail.\n - Adversarial robustness in LLMs - is this relevant to your work since you don’t look into defenses?\n- **Lack of threat model:** Why are you working on black-box attacks on frontier models? Why is this more impactful to work on than white-box attacks? (I think it is)\n- **Improving method explanation**:\n - Desiderata section. Universal and/or automated - this sentence is hard to parse; perhaps separate it into two bullets.\n - Fixed size (Section 2.2) - this bullet point makes it hard to understand what you mean. It is a simple concept, and I think it could be explained more clearly and with fewer words. I think it would be easier to define complexity, C, as the number of characters that map to something else. Then it is easy to understand that C=0 means there is no change, and C=26 means you change each letter for something else.\n - The false positive rate is not measured. Report the false positive rate when you talk about it. Also, do you check every single response in all your experiments? Some more clarity on your method here (including a rubric your human evaluators follow) would be good, as it impacts the ASRs throughout the paper a lot and will help people replicate your results.\n - Add how you filter HarmBench, e.g., do you just use standard direct requests and remove copyright/context ones?\n- **Lack of black-box baselines and explanation of baselines used**:\n - There should be more baselines, e.g., vs PAIR and TAP. You could evaluate these with the BoN methodology too. I expect PAIR to get similar ASRs on GPT4o-mini. Without these, it is hard to judge the attack’s relative strength.\n - Please explain your implementation for ASCII, Caesar cipher, Morse code, and self-cipher. (I suggest significantly cutting down Section 5 to make room.) The reader is left guessing how these work. Is it fair to compare them when you have a different attack budget for bijection learning?\n- **The presentation of results is poor as the figures are too small and contain too much data.** In general, I’d recommend thinking about the main insight you want the reader to take away from each figure and majorly simplifying it so that is the case.\n - Figure 3 - this is a little messy. Why do we need the tables on the right when the bar charts have all the details? I think it would be great to have a bar chart with the best fixed size for each bijection type and compare directly against baselines for each model (using the full 320 behaviors since HarmBench-35 isn’t large enough). Then, a separate figure that ablates the fixed size. Then, it makes it easy to highlight the insights you’d like to convey in the caption and main text. What is the attack budget for these?\n - I think Table 1 contains the exciting results. I’d suggest leading with this before Figure 3. I’d also like to see comparable PAIR or TAP ASRs on the HarmBench 320 and AdvBench 50 set in the table. I think it is less important to show the fixed size and attack budget and just show the ASRs vs the baselines. Also, maybe fix the attack budget to the same as the budget for PAIR so it is more directly comparable. Also, how did you choose the current attack budgets?
Was that the plateau point?\n - Figure 4 left - This is an awesome plot, and I think you should make a bigger deal out of it.\n - Figure 5 - I think this could be better as a line plot. Perhaps choose to plot either ‘digit’ or ‘letter’ rather than both to simplify. Share a legend across all panels. Also, I think just having three lines: successful attack, refusal, and unhelpful non-refusal would help simplify this. Potentially, just plotting unhelpful non-refusal for each model would best get across your point that smaller models can’t learn complex bijections.\n - Figure 6 - share the legend so each plot is bigger. The main point you are trying to make is that the capability degrades as fixed size increases, but this message is easily lost with the amount of information you present. I’d recommend simplifying - just show it averaged over models and maybe drop some of the multi-digit results. You can put this plot in the appendix to refer to for the nitty-gritty details.\n - Figure 7 - I don’t think this is a scaling law. It is more of a Pareto frontier plot where you want to show the trade-off in two metrics. A scaling law would involve the amount of compute, data size, or size of the model. Did you experiment with different fits? A quadratic might not be as good as a higher-order polynomial. Why is a quadratic the right fit for this data?\n - General plotting advice: Please use error bars for all plots that take into account the sample size used (this can just be a standard error of a proportion that doesn’t involve rerunning many seeds). Use a colour-blind palette, and make the font in figures a similar size to the paper’s font." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We jailbreak frontier language models with a novel state-of-the-art encoding-based jailbreak, and we derive inverse scaling laws regarding the efficacy of our jailbreak." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024endless,\ntitle={Endless Jailbreaks with Bijection Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xP1radUi32},\nnote={under review}\n}" }, "abstract": { "value": "Despite extensive safety training, LLMs are vulnerable to adversarial inputs. In this work, we introduce a simple but powerful attack paradigm, bijection learning, that yields a practically endless set of jailbreak prompts. We exploit language models' advanced reasoning capabilities to teach them invertible languages (bijections) in context, pass encoded queries to the model to bypass built-in safety mechanisms, and finally decode responses back into English, yielding helpful replies to harmful requests. Our approach proves effective on a wide range of frontier language models and harm categories. Bijection learning is an automated and universal attack that grows stronger with scale: larger models with more advanced reasoning capabilities are more susceptible to bijection learning jailbreaks despite stronger safety mechanisms." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "jailbreaking", "redteaming", "AI safety", "AI alignment", "adversarial robustness", "adversarial attacks" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/fb4e9498d6f73231a0bed27dd1bc33309c76124a.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Endless Jailbreaks with Bijection Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xPO6fwvldG
UniRestore3D: A Scalable Framework For General Shape Restoration
main
Active
Shape Restoration;3D Reconstruction;Diffusion Model
applications to computer vision, audio, language, and other modalities
5;5;6;6
4;3;4;3
2;3;3;3
2;3;2;3
3;3;3;3
5.5
3.5
2.75
2.5
3
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See above." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The problem is useful, for preprocessing noisy 3D scans. \nThe results seem good." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a unified model for general shape restoration, aiming to recover 3D shapes with various defects, such as incompleteness and noise. By standardizing data representation and constructing a large-scale dataset of diverse shape defects, the authors develop an efficient hierarchical generation model and a noise-robust encoder, demonstrating improved applicability and scalability across multiple restoration subtasks on various datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There are no visual comparisons for other methods, making it hard to compare the improvements. \nAll data is from the proposed dataset, how about results on real noisy 3D scans?\nHow about the results on other datasets? Does the network overfits on training set?\nThere is no overview figure of the whole method, from Figure 3-4, it is still hard to follow the method design. \nFor different noises, like low-resolution, noisy completion, noisy refinement etc., do they share the same pipeline/network?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Questions:\n- Given that the patch-wise encoders produce sparse feature grids, how is the feature alignment being computed? A shape that has missing geometry will not have the same sparse voxel grid as the intact shape, so is the feature alignment loss only being computed for voxel indices which exist in both sparse feature grids?\n\n- Since all the encoders, decoders, and LDM use sparse convolutions, how is the model able to generate missing structures from the sparse feature grid? Is some form of structure prediction module, similar to Ren et al. (2024) and Huang et al. 
(2023), being used in the decoder?\n\nCorrections to make:\n- In Table 6 of the appendix, the best CD for the Lamp category and the best IoUs for the Bed and Bench categories are incorrectly bolded.\n- In Table 7 of the appendix, the best IoUs for the Bathtub, Basket, and Printer categories are incorrectly bolded. \n\nReferences:\n- Jiahui Huang, Zan Gojcic, Matan Atzmon, Or Litany, Sanja Fidler, and Francis Williams. Neural kernel surface reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4369–4379, 2023. \n- Xuanchi Ren, Jiahui Huang, Xiaohui Zeng, Ken Museth, Sanja Fidler, and Francis Williams. Xcube: Large-scale 3d generative modeling using sparse voxel hierarchies. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4209–4219, 2024." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The proposed dataset is an improvement over many of the existing datasets used for evaluating shape completion. Many previous datasets are constructed from the few largest categories from the ShapeNet or PartNet dataset and typically only contain a single type of defect (i.e., incompleteness). On the other hand, the proposed dataset is much larger in scale and contains greater diversity, as it is constructed from a diverse set of shape datasets, while also modeling more realistic shape defects present in object scans (e.g., noise + incompleteness).\n\n- The qualitative results seem to suggest the proposed approach does a better job than previous approaches at respecting the fine geometric details present in the partial scans while producing higher quality completions of the missing regions." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "A unified shape restoration model is proposed for handling multiple types of defects (e.g., incompleteness, noise, sparsity) present in scans of 3D objects. The unified shape restoration model is composed of a patch-wise encoder for locally encoding defective shapes, improving generalization capabilities, and a hierarchical latent diffusion model used for generating intact shapes. To enable robustness to various defect types and improve generalization to novel objects, the proposed model is trained on a newly constructed large scale dataset which contains a variety of shape restoration subtasks. Furthermore, a two-stage training scheme is proposed for accelerating training on high-resolution shapes." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- While a high-level description of the H-VAE encoder, H-VAE decoder, and H-LDM is provided, there is no description of the actual architectures for these modules. I would expect to see a more detailed description of what the architectures are at the very least in the appendix.\n\n- A scalable training strategy is posed as a contribution; however, there is no evidence that the proposed strategy is more efficient. A table comparing training times between the two strategies would demonstrate the benefits more clearly. 
If the model truly can’t be trained using the normal/standard strategy, the authors could always train on a subset of the data and report training times for that subset or extrapolate what the training time would be for the entire dataset.\n\n- The model does not seem to generalize all that well or be robust to unknown categories. In Table 1, the model obtains a 2-5x worse MMD/AMD on ABO across the different tasks and about a 100x worse MMD/AMD on GSO. This large drop in performance is observed even for the noisy refinement task, which should be an easier task than noisy completion.\n\n- The quantitative results don’t really demonstrate that the proposed model is better. In Table 2, the proposed approach is the only model pre-trained on the large-scale dataset, which isn’t really a fair comparison for evaluating model performance. Even when pre-trained on their dataset, the model performs similarly to or worse than NeuSDFusion, and to actually outperform NeuSDFusion they had to artificially add more task-specific examples (noise-free completions) to the pre-training data. It’s not clear whether all baseline methods would improve and outperform the proposed method from a similar pre-training. Instead, the authors should add a comparison of their approach with no pre-training to Table 2 to fairly evaluate model performance.\n\n- Similarly in Table 3, a comparison needs to be added with no pre-training. The model slightly outperforms SC-Diff, but this improvement could just be from pre-training on a large-scale dataset." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* Line 429: typo: tsdf -> TSDF\n* Please provide training hardware, memory, and time requirements. Same for inference.\n* The paper claims that the denoiser can be conditioned on \"text or images\", which was never demonstrated. The claim must be removed or proven in the given context. If it is not used, it should be removed from the method (especially equation 1). If it is used, its effect must be evaluated. This is my biggest concern currently." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* Sane method with reasonable justification for each step.\n* The paper presents a new benchmark for shape completion using TSDFs, which could potentially be very useful in the field, as currently used benchmarks are either not tailored to TSDFs or extremely low resolution. \n* The detailed account of all the work necessary to compare to the existing 'benchmarks' beautifully highlights the hideous state of the benchmarks in this field. Every single paper that is published uses its own internal representation and relies on very specific data preprocessing. 
They all have to compare to absolutely terrible benchmarks like Patch Complete or 3DQD - burning so much time and brainpower of talented people just to figure out how to correctly recreate the benchmark to work with their internal data and somehow stay comparable to the 32^3 numbers. It is an absolute shame. I think the paper did this well; still, I wonder what brilliant work the authors could have done with the time they had to waste on this.\n* The diffusion-based method can produce multiple suggestions which the user can choose from." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a dataset/benchmark that unifies shape completion, denoising, and super-resolution on TSDFs, and a hierarchical latent-diffusion-based method to solve all those tasks jointly.\nThe dataset is assembled from a variety of datasets, including Objaverse, using a unique pipeline to get from non-manifold meshes to ground truth and different versions of incomplete or corrupted TSDF grids.\nTheir unified restoration method uses two main modules: a hierarchical (multi-level) Variational Autoencoder and a hierarchical (multi-level) latent diffusion model. All models (except for the coarsest level) are implemented using sparse convolutions, which only act on SDF values close to the surface. The VAE is trained separately from the diffusion model in two stages: first on the clean shapes, and in a second step the VAE is refined on the corrupted shapes, where the latents are guided to be close to the ground-truth latents of the clean objects.\nThe latent diffusion model runs from coarse to fine. It is conditioned on the latents of the corrupted shape and the occupancy grid predicted in the next coarser level. The diffusion model is trained to denoise the latents into the clean ground-truth latents. The occupancy grid for the next level is extracted by decoding the latents into a TSDF using the VAE decoder and extracting the geometry. On the finest level, the decoded TSDF is the final result of the method.\nThe method is validated on a test split of their own dataset and also on the 3DQD and PatchComplete benchmarks, and demonstrates SOTA performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The shape representation (TSDF) is mentioned for the first time on page 4. Your readers might have very different backgrounds and very different understandings of 3D shapes. Some might think you work on 3D point clouds, others think you work with meshes, yet others might think you for sure use binary occupancy grids or unsigned distance field neural representations or whatever. In the related works you put yourself next to all kinds of methods without pointing out this major difference. In my humble opinion the choice of shape representation is a central feature, which defines its usefulness for different problems. It should be communicated very clearly - if not in the Title, then at least in the Abstract or the Introduction. Also, your resolution of 256 is really nice and could be mentioned earlier. \n* No information about training time of the modules is given. No information on inference time and memory requirements for inference is given.\n* The diffusion-based method is not usable for online real-time applications (during 3D scanning)."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "(1) For dataset creation, does every intact shape in the database only employs one of the four subtasks, or some shape will be simultaneously corrupted by multiple subtasks together? Also, for the ablation study in table 4, does the \"joint training\" has the double amount of data compared to \"noisy refinement only\" and \"noisy completion only\"? In other words, does the improvement of the proposed method comes from scale-up of data amount or from joint subtasks learning? \n\n(2) I found it hard to understand the importance of the \"scalable training strategy\" proposed in section 4.1. Why it is \"scalable\"? It is mentioned in the paper that \"it significantly slows down training due to the need to load high-resolution defective shapes\". Isn't it true that loading intact shape should also slows down training in the first stage? Also, the proposed method still requires loading defective shapes in the second stage - so what is improved here?\n\n(3) Will the proposed dataset open-sourced in the future?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The attempt of unified framework for 3D shape restoration under different defections is interesting.\n\n- The proposed large-scale shape restoration dataset is very useful to the community.\n\n- Outstanding results on multiple shape restoration tasks which outperforms state-of-the-art.\n\n- Good writing and enough details in the appendix for reproducible research." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposed a new framework for general shape restoration, which aims to handle multiple types of defection in a single framework. The proposed framework supports four types of shape defection, namely noise / noise-free completion, noise refinement (i.e., denoising), and super-resolution. The proposed new framework including a new large-scale dataset with various types of defections, a multi-scale defection-robust shape encoder, and a conditional latent diffusion model for shape generation. Experiments on multiple shape restoration tasks demonstrated the effectiveness of the proposed framework." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- While I really appreciate the efforts for building such a large and general framework (and the result is really promising), I found it hard to understand the key challenge and insights for building such a framework. Specifically, \n\n(1) What is the key challenge for building the large scale dataset for shape restoration? The process mainly consists of randomly adding different type of pre-defined perturbation on existing large-scale 3D object datasets. 
It is surprising (or maybe I missed some work?) that no one ever did this before.\n\n(2) As mentioned in the introduction section, \"These diverse tasks require different model capabilities, making it challenging to design a unified model for general shape restoration\" - how does the proposed framework address this capability requirement? It reads something like \"we just merged all the data with different tasks together for training, and it worked.\" For example, one direct approach to training a unified framework would be simply merging all existing datasets for different tasks and then performing joint training. I wonder what the result would be, as it would give us more insight regarding why the proposed framework has good performance - is it simply because the amount of data in the proposed framework is larger, or because of the carefully designed defect-synthesis pipeline, or other factors? \n\n- There are some unclear parts regarding the dataset and the experiment setup. Please see the \"Questions\" section below." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "A unified shape generative model with a scalable training strategy for restoring various forms of defective shapes." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024unirestored,\ntitle={UniRestore3D: A Scalable Framework For General Shape Restoration},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xPO6fwvldG},\nnote={under review}\n}" }, "abstract": { "value": "Shape restoration aims to recover intact 3D shapes from defective ones, such as those that are incomplete, noisy, and low-resolution. Previous works have achieved impressive results in shape restoration subtasks thanks to advanced generative models. While effective for specific shape defects, they are less applicable in real-world scenarios involving multiple defect types simultaneously. Additionally, training on limited subsets of defective shapes hinders knowledge transfer across restoration types and thus affects generalization. In this paper, we address the task of general shape restoration, which restores shapes with various types of defects through a unified model, thereby naturally improving the applicability and scalability. Our approach first standardizes the data representation across different restoration subtasks and constructs a large-scale dataset with diverse types of shape defects. Next, we design an efficient hierarchical shape generation model and a noise-robust defective shape encoder that enables effective impaired shape understanding and intact shape generation. Moreover, we propose a scalable training strategy for efficient model training. The capabilities of our proposed method are demonstrated across multiple shape restoration subtasks and validated on various datasets, including Objaverse, ShapeNet, GSO, and ABO." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Shape Restoration", "3D Reconstruction", "Diffusion Model" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/fd93e352bd0c62d72a4c07987b1d748a51f83de8.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "UniRestore3D: A Scalable Framework For General Shape Restoration" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xPTzjpIQNp
Optimal Transport for Time Series Imputation
main
Active
Time series;Imputation
applications to computer vision, audio, language, and other modalities
5;6;8
2;2;2
2;3;3
2;3;3
2;2;4
6.333333
2
2.666667
2.666667
2.666667
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "NA" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- What if the time series components are observed at different temporal frequencies (say days vs. hours, or hours vs. mins)?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Very good empirical investigation.\n\n- Clear presentation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Providing methods for time series imputations while respecting the temporal dependence." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "NA" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "see above." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. An interesting and well-motivated design of the spectral-enhanced Wasserstein distance (WD)\n2. A theoretical justified design of proximal spectral WD to account for non-stationarity.\n3. seemingly excellent performance in real-world benchmark datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a time series imputation method based on optimal transport. The key idea is the combination of a frequency-based Wassertein discrepancy and selective matching regularization. Theoretical justification is also provided. The experimental results show the imputation accuracy outperforms many sota methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The biggest issue from my end is the lack of standard deviation. From Table 1, the error of the proposed method seems really good, but i am not informed if these results are averaged over multiple train/test runs or just one run. To avoid cherry picking, the authors are encouraged to highlight how these numbers were obtained, what the training/test splits were, and what hyperparameter selection/cross-validation process was involved, etc. Similar expectations apply to table 3 and 4. \n\n2. Lack of convergence discussion/analysis. From Fig. 
3 and Section 3.4, the imputation procedure seems to repeatedly sample patches from the time series, compute PSW, and use the gradient of the PSW to update the imputation. There seems to be a lack of convergence guarantees or discussion of this procedure. Can the authors provide at least some discussion on this? \n\n3. Data noise issue. Real time-series data often includes noise. That means just computing the distance after the DFT (line 154) might be affected by data noise. Have the authors considered any methods or trade-offs, such as low-pass filters, in the SWD definition to improve robustness and/or counteract the noise?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- In 'Contributions', the authors mention that PSW-I eliminates the need for masking, but PSW-I seems to use masking (e.g., as shown in Fig. 3). What does this description mean?\n\n- What does Lemma 3.2 indicate? How do you know 'deviates more from the typical elements of \\beta'? Further, how do you know the PSW discrepancy avoids the problem indicated by this lemma? In Theorem C.2, the perturbation in the PSW discrepancy is shown. Is it possible to compare this with Lemma 3.2? Even if it is possible, what does it mean? Is it clear how large (or small) a perturbation caused by outliers ultimately affects the final imputation?\n\n- Although D_KL is used in (2), T 1_m and T^T 1_n are not probability distributions (because the normalization (sum equals 1) constraint is removed). How are they normalized?\n\n- What is the definition of the 'DFT matrix' in the gradient? How does the gradient of (2) become $\\Delta_{\\alpha_i} P$ and $\\Delta_{\\beta_j} P$? Does the second term in (2) disappear? Is the optimality condition w.r.t. T considered in this gradient calculation? \n\n- After (1), D is the squared distance, while in Definition 3.1 D is the distance without the square. Which is correct?\n\nMinor issues:\n\n- At the end of p. 3: Fig. 4 should be Fig. 1? In Fig. 1(a), the left is W^(F)?\n\n- The first word of Sec. 3.2: 'time-series' -> 'Time-series'" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Using OT to impute time-series data is an interesting approach.\n\n- Empirical evaluation shows high performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes an optimal transport (OT) based time-series imputation method. The authors claim that naive application of OT does not work for time-series data. The proposed method considers applying OT in the frequency domain of the original data, called pairwise spectrum distance (PSD). Further, to deal with multiple modes, the proximal spectral Wasserstein (PSW) distance is also proposed, in which the mass constraint is removed to make transportation more flexible."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Some technical justification is vague. Clearer descriptions would be desired.\n\n- Introduction is a bit too abstract about the proposed method. It describes what problem is solved in the paper, but does not describe the basic technical idea how it is achieved." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024optimal,\ntitle={Optimal Transport for Time Series Imputation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xPTzjpIQNp},\nnote={under review}\n}" }, "abstract": { "value": "Missing data imputation through distribution alignment has demonstrated advantages for non-temporal datasets but exhibits suboptimal performance in time-series applications. The primary obstacle is crafting a discrepancy measure that simultaneously (1) $\\textit{captures temporal pattern}$—accounting for patterns such as periodicities and temporal dependencies inherent in time-series—and (2) $\\textit{accommodates non-stationarity}$, ensuring robustness amidst multiple coexisting temporal patterns. In response to these challenges, we introduce the Proximal Spectrum Wasserstein (PSW) discrepancy based on the stochastic optimal transport framework, which incorporates a pairwise spectral distance to encapsulate temporal patterns, coupled with selective matching regularization to accommodate non-stationarity. Building upon PSW, we develop the PSW for Imputation (PSW-I) framework, which iteratively refines imputation results by minimizing the PSW discrepancy. Extensive experiments demonstrate that PSW-I effectively addresses these challenges and significantly outperforms prevailing time-series imputation methods." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Time series", "Imputation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/a319116f28301972f4377abff3bfe5ab090b1201.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Optimal Transport for Time Series Imputation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xPxHQHDH2u
Reflective Gaussian Splatting
main
Active
Gaussian-Splatting;Physically based Rendering;Deferred-Rendering;Inter-Reflection
applications to computer vision, audio, language, and other modalities
5;5;6;6
5;4;4;4
3;2;3;3
2;2;3;3
3;3;4;4
5.5
4.25
2.75
2.5
3.5
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. What's the main difference between the \"material-aware normal propagation\" in the paper and the \"normal propagation\" in 3DGS-DR? According to 3DGS-DR, their normal propagation are also aware of reflection strength. Is there any key improvement over their solution? Otherwise, is it an adaptation from reflection-strength-aware to PBR-attribute-aware?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. By extracting explicit geometry, the paper addresses the inter-reflection issue in Gaussian splatting, which is important in realistic PBR. Experimental results showcase its SOTA performance in novel view synthesis and decomposition on reflective cases.\n2. Instead of per-Gaussian PBR (like Relightable 3DGS or Gaussian Shader), The proposed method employs an effective PBR deferred rendering to achieve better PBR performance (similar to 3DGS-DR). Further ablation study demonstrates the superiority of such a deferred rendering technique over the per-Gaussian solutions.\n3. The proposed method employs a material-aware normal propagation. This enhances normal estimation by periodically increasing the scale of 2D Gaussians with high metallic and low roughness, which demonstrates interesting material-normal interaction in 3DGS-based PBR." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Though powerful in novel view synthesis, vanilla 3DGS encounters challenges in extending to physically-based rendering or modeling inter-reflection due to lack of deterministic geometry. This work introduces a novel approach Ref-Gaussian, which achieves real-time high-quality rendering of reflective objects while also modeling inter-reflection effects.\n\nThe paper proposes several key techniques:\n(a) Geometry enhanced technique: employing 2DGS to bridge deterministic geometry with Gaussian splatting and enhancing the geometry by the novel material-aware normal propagation.\n(b) PBR optimization framework for 3DGS-based methods: using per-Gaussian PBR initialization followed with physically-based deferred rendering.\n(c) Gaussian-grounded inter-reflection: applying real-time ray-tracing to the extracted mesh from 2DGS.\n\nExtensive experiments demonstrate the effectiveness of these techniques and that the proposed method outperforms several baselines significantly." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The effectiveness of inter-reflection technique lacks further qualitative evidence (e.g. 
providing indirect components in Fig. 5 or showcasing indirect components in the Ref-Real dataset, where multiple objects provide rich inter-reflection), as the ablation study in Table 3 indicates only a slight decrease in PSNR when rendering without inter-reflection.\n2. The ablation study in Table 3 only takes PSNR changes into account, while the influences on geometry may need further demonstration (e.g., normal MAE or qualitative illustration), in order to provide stronger evidence for the effectiveness of each technique." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- In the ablation study, the normals from 2DGS are much better than those from 3DGS. Since accurate normals are very important for PBR, does this mean that the performance gain of the proposed method over other methods comes largely from the better geometry reconstruction quality of 2DGS? This is important for evaluating the technical contribution of this paper. I hope the authors can give quantitative data (**use 3DGS as the representation and keep the rest of the pipeline unchanged to evaluate the rendered results**) to show how much performance improvement can be achieved by using the 2DGS representation compared to 3DGS.\n- Since indirect lighting is modeled as an attribute of each Gaussian, it represents the inter-reflection under the lighting conditions corresponding to the training stage. Is it possible to build indirect lighting when relighting?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Rendering quality is excellent, and reconstructed normals are accurate.\n- Training time and rendering speed are satisfactory." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposed a novel Gaussian Splatting based inverse rendering framework, which focuses on reflective object reconstruction. This method relies on deferred shading to achieve smoother and more cohesive rendering results, as well as the combination of mesh-based visibility and per-Gaussian indirect lighting to model the inter-reflection. The experimental evaluation demonstrates that this method can accurately reconstruct reflective objects while maintaining real-time rendering capabilities." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The contribution of this paper may lack novelty. For the deferred shading part of the pipeline, I think many previous 3DGS-based inverse rendering methods[1][2][3] have adopted these techniques, and [2][3] also use the split-sum approximation to handle the intractable rendering equation. Furthermore, assigning each Gaussian a new attribute to model the indirect lighting has also been proposed in previous methods[4].
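(As an aside, the single reflected direction that the visibility scheme traces — quoted in my next point — is just the standard mirror reflection; an illustrative NumPy one-liner with hypothetical names, not the authors' code:)\n\n```python\nimport numpy as np\n\ndef reflect(w_i: np.ndarray, n: np.ndarray) -> np.ndarray:\n    \"\"\"Mirror the unit view direction w_i about the unit normal n: R = 2 (w_i . n) n - w_i.\"\"\"\n    return 2.0 * np.dot(w_i, n) * n - w_i\n```\n\n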
The innovation of this method lies in the new visibility modeling scheme and optimization techniques. Unlike previous methods that use a baked volume or Gaussian-based ray-tracing to model occlusion, the proposed method attempts to first extract the mesh using TSDF and then use mesh-based ray-tracing to obtain occlusion.\n\n- To determine the visibility, this method considers the occlusion at the reflected direction $\boldsymbol{R}=2\left(w_i \cdot \boldsymbol{N}\right) \boldsymbol{N}-w_i$. This means that this method only considers the indirect lighting of the specular surface. For glossy or diffuse surfaces, this estimation may not be accurate enough, and such objects do exist in the dataset. For example, for the Potion in Fig. 5, the lid of the bottle is obviously a rough diffuse surface. Since the proposed method focuses on the reconstruction of reflective objects, this does not seem to be a serious problem, but I would like to know if there is a way to improve this.\n\n- In addition, since the visibility only considers the reflected direction, I am a little confused about the integral in Equation 9, because this method does not calculate the integral over the entire hemisphere $\Omega$. I want to know how the rendering equation is finally computed.\n\n[1] DeferredGS: Decoupled and Editable Gaussian Splatting with Deferred Shading https://arxiv.org/abs/2404.09412 \n\n[2] GS-IR: 3D Gaussian Splatting for Inverse Rendering https://openaccess.thecvf.com/content/CVPR2024/papers/Liang_GS-IR_3D_Gaussian_Splatting_for_Inverse_Rendering_CVPR_2024_paper.pdf\n\n[3] 3D Gaussian Splatting with Deferred Reflection https://dl.acm.org/doi/pdf/10.1145/3641519.3657456\n\n[4] Relightable 3D Gaussians: Realistic Point Cloud Relighting with BRDF Decomposition and Ray Tracing https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/06121.pdf" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. L239-244. More details are required to describe how the indirect lighting is rendered. To be specific,\n - Does the method conduct explicit ray tracing for the reflected ray and compute ray-Gaussian intersections? Or does the method use Gaussian splatting along the reflected ray directions?\n - In Eq. (10), the meaning of the symbol $N$ in $i\in N$ is unclear: does it denote the set of Gaussians intersected by the reflected ray?\n2. L254: \"To mitigate this, we apply the rendering equation at the Gaussian level to achieve geometry convergence initially.\" What does \"rendering equation at the Gaussian level\" mean? What is the difference between this and Eq. (8)?\n3. Are the training hyperparameters the same across different scenes?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.
The key designs proposed by the paper are rational. It is not surprising that leveraging physically-based rendering can significantly improve the performance of specular rendering.\n2. I like the idea of material-aware normal propagation. It seems that it can greatly improve the quality of surface normal reconstruction.\n3. Extensive qualitative and quantitative experiments demonstrate that the proposed method outperforms baselines.\n4. I appreciate the comprehensive ablation study that covers lots of specific design choices." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a Reflective Gaussian splatting (*Ref-Gaussian*) framework to achieve real-time, high-quality novel view synthesis for reflective objects with inter-reflection effects. The framework consists of two main parts: (i) physically-based deferred rendering, which associates each Gaussian with material properties and leverages physically-based surface rendering (the rendering equation and the split-sum approximation) to compute the rendered color; (ii) Gaussian-grounded inter-reflection, which computes the visibility of the reflected ray and models indirect view-dependent color as per-Gaussian spherical harmonics. The proposed method also leverages several techniques to enhance the underlying geometry. This paper evaluates the proposed method with qualitative and quantitative experimental results and validates its design components with an ablation study. It shows that the method can outperform baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Missing details. L233-234: "During optimization, we periodically extract the object’s surface mesh using truncated signed distance function (TSDF) fusion." The authors need to specify the number of steps used as the period of mesh extraction.\n2. Unclear explanations. I had several points of confusion when reading the paper; please see the "Questions" part. I believe that the paper's writing can be improved to explain the method more clearly.\n3. Baselines. Although this method is based on 3DGS, it should also include more methods based on other representations (such as NeRF) as baselines, such as (a) NeRO: Neural Geometry and BRDF Reconstruction of Reflective Objects from Multiview Images (SIGGRAPH 2023) and (b) Neural Directional Encoding for Efficient and Accurate View-Dependent Appearance Modeling (CVPR 2024).\n4. Limitation of Gaussian-grounded inter-reflection: In Eq. (8), it is clear that the direct lighting part can capture rough specular effects. But in Eq. (10), the method only traces a single reflected ray to compute the indirect reflection, which will introduce errors for rough surfaces. This should be added as a limitation.\n5. Experiments. The method leverages the extracted mesh to compute visibility, so I think the authors should also show (at least qualitative) results of the extracted mesh in the experiment section.\n\nI may consider increasing the rating if my concerns and questions are addressed in the rebuttal." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. From the paper, I assume that the geometry is important for disentangling appearance and those PBR materials. I would like to know if PBR modeling will also benefit geometry. \n\n2. Why is the LIPPS metric of the proposed method on the real dataset worse than 3DGS-DR?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The proposed method achieves state-of-the-art performance on several benchmark datasets, validating its significance. \n2. The proposed method offers a good trade-off between performance and efficiency. The introduced deferred shading technique, plus the occlusion approximation using a TSDF mesh, sounds reasonable for these goals.\n3. The paper is well-written and easy to follow. The paper has enough details for reproduction." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on novel view synthesis for reflective objects. It proposes a method called Reflective Gaussian Splatting, which uses Gaussian Splatting as its primary representation and introduces two additional techniques: (a) physically-based deferred shading and (b) an accurate approximation for modeling inter-reflection effects within the Gaussian Splatting framework. The effectiveness of this method is validated using various benchmark datasets. Additionally, the method is compute-efficient, making it suitable for applications such as relighting and material editing." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While efficient in modeling the occlusion with an extracted mesh, its performance may rely on the accuracy of the extracted mesh. Since extracting a good mesh for the reflective object is non-trivial, such a non-differentiable approximation may lead to worse results.\n\n2. While built upon 3DGS-DR, its performance (LPIPS) on real datasets appears limited over 3DGS-DR. \n\n\n#minors:\nThe citation in L49~L50 seems misplaced." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024reflective,\ntitle={Reflective Gaussian Splatting},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xPxHQHDH2u},\nnote={under review}\n}" }, "abstract": { "value": "Novel view synthesis has experienced significant advancements owing to increasingly capable NeRF- and 3DGS-based methods. However, reflective object reconstruction remains challenging, lacking a proper solution to achieve real-time, high-quality rendering while accommodating inter-reflection. To fill this gap, we introduce a Reflective Gaussian splatting (Ref-Gaussian) framework characterized with two components: (I) Physically based deferred rendering that empowers the rendering equation with pixel-level material properties via formulating split-sum approximation; (II) Gaussian-grounded inter-reflection that realizes the desired inter-reflection function within a Gaussian splatting paradigm for the first time. 
To enhance geometry modeling, we further introduce material-aware normal propagation and an initial per-Gaussian shading stage, along with 2D Gaussian primitives. Extensive experiments on standard datasets demonstrate that Ref-Gaussian surpasses existing approaches in terms of quantitative metrics, visual quality, and compute efficiency. Further, we illustrate that Ref-Gaussian supports more applications such as relighting and editing." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Gaussain-Splatting", "Physically based Rendering", "Deferred-Rendering", "Inter-Reflection" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/6233615dc98ba1e53cada2d6fbf54b06aaed1444.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/b62d9b7cf129ca3914fe5a9a1f878194101827c9.zip" }, "title": { "value": "Reflective Gaussian Splatting" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xQAhUIuAc6
Axis-level Reflectional Symmetry Detection with Group-Equivariant Representation
main
Active
Symmetry detection;Equivariant learning;Group equivariance
applications to computer vision, audio, language, and other modalities
5;5;6
2;5;5
3;3;3
3;1;3
3;2;2
5.333333
4
3
2.333333
2.333333
0.5
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "The authors are basing this on Cohen et al.’s work for equivariant cnns and I’m not sure how is this different? Those filters are already rotational equivariant based on the symmetry groups they represent. \n\n“Lenc & Vedaldi (2015) show that the AlexNet CNN (Krizhevsky et al., 2012) trained on imagenet spontaneously learns representations that are equivariant to flips, scaling and rotation.” from Group Equivariant Convolutional Network. Why would this approach be necessary for rotational symmetry invariance for reflection symmetry detection?\n\nHow large is the kernel size? If using too small a size, how can it be 8-fold symmetric?\n\nHow well does this objective function work on non-changed neural networks? What about modern networks like Convnext?\n\nLine 236: why D8 and not some other amount of rotations?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is using rotational equivariant networks to improve symmetry detection. It’s an interesting approach (though needs to be sufficiently distinguished from others in the field). \n\nThis paper is clearly written and I can follow the logic on what they are trying to do.\n\nI appreciate the approach with fibers and it seems interesting to not use a dense approach. I like the difference in the approach and would like to see more of that with different backbones since I think it would work even better." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a reflection symmetry detection system with matched, multiscale kernels rotationally equivariant network. The work uses equivalent networks to allow symmetries to be detected at rotations. The authors also use a fiber based approach to directly find the symmetry rather than first predicting a heatmap for the symmetry." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Major:\nFor group representation and a longer background of symmetry detection and needs to be cited here is Computational symmetry in computer vision and computer graphics by Yanxi Liu et al. 2010\n\nThe evaluation only compares against a recent method and doesn’t go back to any of the previous methods (check out Funk et al 2017 for a list of methods where most are freely available online). They are used in the papers the authors compared with: Seo, Ahyun, Woohyeon Shim, and Minsu Cho. \"Learning to discover reflection symmetry via polar matching convolution.\" Proceedings of the IEEE/CVF international conference on computer vision. 2021. In addition, why have you deviated from the standard precision recall with F1 marked curve like the previously mentioned papers? That is really useful to understand how your metric compares to others. 
\n\nThis paper (which is cited throughout the submission) also uses equivariant networks for both rotation and reflection symmetry detection. Seo 2021 and 2022 both use equivariant kernels, with Seo 2022 using rotation-group equivariance. The main difference from the group equivariance of the other papers, as far as I understand, is that the group D8 is used rather than a special Euclidean symmetry group out of the 17. I'm only referring to the difference in the equivariance and not other differences in the approach. \n\n\nMissing citations:\nGens, R. and Domingos, P. Deep Symmetry Networks. NeurIPS, 2014, proposed an equivariant convolutional architecture that needs to be cited and compared with. \nRotationally-invariant CNNs - Dieleman, Sander, Kyle W. Willett, and Joni Dambre. "Rotation-invariant convolutional neural networks for galaxy morphology prediction." Monthly Notices of the Royal Astronomical Society 450, no. 2 (2015): 1441-1459.\n\nThe authors should mention that this is an equivariant NN for just 2D data, or other papers such as "Equivariant Multi-View Networks. Carlos Esteves et al. ICCV 2019" should be cited. \n\nA figure to help understand Sections 3 and 4 would be helpful for understanding what you are getting at here visually. There is a lot of text and equations, and I think a figure to get at the expansion of fibers and how the authors are using symmetry groups would be a big help.\n\n\nMinor:\nThe first paragraph needs citations. You can't just state facts in a paper without a citation on symmetry being a fundamental concept. You can go back to Gestalt theory or how symmetry detection is prevalent in the animal kingdom, but cite it. \n\nIn Figure 1, (a), (b), etc. need to be labeled in the image. This is hard to follow otherwise." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* The implementation details for orientational anchors could be expanded to clarify their integration within the broader architecture and their impact on computational efficiency.\n* The model’s applicability to continuous symmetries, such as ellipses or curved patterns, is limited, which may constrain its use in certain symmetry-dense applications.\n* The paper lacks evaluation of the method's generalization performance on different datasets, which could limit its applicability to other scenarios." 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The paper introduces an innovative axis-level reflectional symmetry detection method based on dihedral group-equivariant representations.\n* The proposed orientational anchor expansion and reflectional matching modules effectively enhance the model's detection capabilities across various orientations and scales.\n* The method demonstrates strong robustness and generalization in complex real-world scenarios.\n* The paper provides clear explanations of complex concepts and methodologies, aiding reader comprehension." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a novel axis-level reflectional symmetry detection network that leverages dihedral group-equivariant representations to improve the detection of symmetry axes in images. The authors introduce an orientational anchor expansion method for fine-grained, rotation-equivariant analysis across multiple orientations, enhancing the model's ability to detect diverse symmetry patterns. They also develop a reflectional matching module using multi-scale kernels to capture reflectional correlations across different receptive fields, improving robustness. Extensive experiments demonstrate that the proposed method outperforms existing pixel-level approaches in challenging scenarios, establishing a new benchmark in reflectional symmetry detection. The work offers a fresh perspective and significant contributions to the field of symmetry detection." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The implementation details for orientational anchors could be expanded to clarify their integration within the broader architecture and their impact on computational efficiency.\n* While multi-scale reflectional matching is beneficial, further analysis on the trade-off between accuracy and computational overhead would improve the study.\n* The model’s applicability to continuous symmetries, such as ellipses or curved patterns, is limited, which may constrain its use in certain symmetry-dense applications.\n* The dependency on pre-defined kernels in multi-scale matching might limit adaptability to unknown scales or orientations in real-time applications.\n* The paper lacks evaluation of the method's generalization performance on different datasets, which could limit its applicability to other scenarios." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1.\tIn Line 521, you claim that “In Fig. 4, F1-scores for all three methods are plotted across different distance thresholds,” but Fig. 
4 only shows two methods, missing the presentation of PMCNet.\n\n2. The proposed method adapts a line detection network and applies it to the reflectional symmetry detection task. Can the adaptation strategy be effective on other line detection networks?\n\n3. Could you provide comparison results with existing methods on other datasets (such as SDRW[1] and LDRS[2]) to fully demonstrate the superiority of the proposed method? \n\n4. What are the application scenarios, research value, and significance of this study?\n\n[1] Liu, Jingchen, et al. "Symmetry detection from real-world images competition 2013: Summary and results."\n[2] Seo, Ahyun, Woohyeon Shim, and Minsu Cho. "Learning to discover reflection symmetry via polar matching convolution."" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The idea is interesting. Compared to existing methods, which primarily treat reflectional symmetry detection as a pixel-level heatmap prediction problem, this paper classifies the presence of a mid-point of a reflectional symmetry axis for each pixel position and also regresses the angle and length of the axis, directly performing axis-level prediction. \n\n2. Extensive experiments validate the effectiveness of the proposed method, providing more accurate axis-level predictions than existing pixel-level methods.\n\n3. The paper is well-organized, making it easy and quick to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a group-equivariant neural network for axis-level reflectional symmetry detection. The authors introduce orientational anchor expansion for fine-grained rotation-equivariant analysis of different symmetry patterns across multiple orientations. Additionally, the paper develops reflectional matching with multi-scale kernels, enabling robust symmetry detection across various receptive fields. Experimental results demonstrate the effectiveness of the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The literature review is incomplete. The cited references are almost entirely from 2022 and earlier (with only one paper from 2023 and none from 2024), raising questions about the novelty of the work.\n\n2. The proposed multi-scale expansion has already been widely explored and proven effective in tasks such as object detection and segmentation." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024axislevel,\ntitle={Axis-level Reflectional Symmetry Detection with Group-Equivariant Representation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xQAhUIuAc6},\nnote={under review}\n}" }, "abstract": { "value": "Reflectional symmetry detection remains a challenging task in machine perception, particularly in complex real-world scenarios involving noise, occlusions, and distortions. We introduce a novel equivariant approach to axis-level reflectional symmetry detection that effectively leverages dihedral group-equivariant representation to detect symmetry axes as line segments. 
We propose orientational anchor expansion for fine-grained rotation-equivariant analysis of diverse symmetry patterns across multiple orientations. Additionally, we develop reflectional matching with multi-scale kernels to extract effective cues of reflectional correlations, allowing for robust symmetry detection across different receptive fields. Our approach unifies axis-level detection with reflectional matching while preserving dihedral group equivariance throughout the process. Extensive experiments demonstrate the efficacy of our method while providing more accurate axis-level predictions than existing pixel-level methods in challenging scenarios." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Symmetry detection", "Equivariant learning", "Group equivariance" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/1fd987f65594e206301fc2d5e0153af2b59017d7.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Axis-level Reflectional Symmetry Detection with Group-Equivariant Representation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xQBRrtQM8u
Adjoint Matching: Fine-tuning Flow and Diffusion Generative Models with Memoryless Stochastic Optimal Control
main
Active
Reward fine-tuning;stochastic optimal control;flow matching;diffusion models;RLHF;adjoint method
generative models
6;6;6;8
4;3;4;4
4;4;3;4
3;3;3;4
3;3;3;3
6.5
3.75
3.75
3.25
3
0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "What is the complexity/computation cost of computing/sampling adjoints in Equation (26) and (27)? \n\nIn addition, for Diffusion-DPO, how the preference pairs are sampled for further fine-tuning? Can authors explain why Diffusion-DPO tuned models yield a decrease in performance?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "This paper is theorectically well written, with detailed introduction to the literature, explanation of motivations, several interesting theorems and propositions, and provide theorectically-driven approaches for diffusion models fine-tuning. Diffusion Models fine-tuning or RLHF for diffusion models are an important direction which already contributes to improving the performance to SOTA diffusion models. This paper indeed provides a novel model-based SOC method for diffusion models alignment, which is theorectically sound and also yields good performance for tuning Flow Matching based models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper provides theorectical insights on why optimizing a KL-regularized reward objective (which is popular and dominant in RLHF for LLM) could lead to a bias in the optimal solution for diffusion models, and how to address this issue by a proper choice of noise schedule. The paper also provides solutions on how to solve the stochastic optimal control problem by adjoint methods, and provide a more efficient alternative than the classical methods and prove its equivalence. Empirical examples are further provided to show the algorithm effectiveness." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main weakness is that the paper can be benefited from more comparisons with baseline methods empirically, specifically, there lacks a baseline in experiments in directly optimizing objective (17) (though theorectically there is value bias as shown in the paper), using stochastic control methods like adjoint matching proposed in this paper. There is comparison for this in the synthetic examples in Figure 2, but more practical downstream tasks evaluations are needed to show that the noise schedule proposed in this paper can indeed yield better performance. 
\n\n**Minors**:\n\n1) On lines 229-230, the expression "Dividing (14) by (15)" is odd, as (14) is not an equality; "plugging the normalization constant (15) into (14)" might be better.\n\n2) The adjoint method in Section 4 needs more introduction or discussion, or an earlier pointer to the relevant part of the appendix, including more clarification on what the adjoint is on lines 350-352 and what the loss in Equation (21) is; the appendix should be referenced earlier instead of being deferred until line 422.\n\n3) In Proposition 2, it is better to define the adjoint matching objective earlier instead of combining the definition with the proposition." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See above." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper provides a theoretically sound framework for the reward fine-tuning problem, viewing it as a stochastic optimal control problem. The observation of the value function bias problem in previous approaches and the proposal of using “memoryless” noise schedules are based on this view.\n\nThe proposed Adjoint Matching algorithm for SOC, casting it as a least-squares regression problem, is novel and effective.\n\nThis paper is well-written, clearly structured, and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies reward-based fine-tuning for diffusion models. The authors frame the reward fine-tuning problem as stochastic optimal control, and point out an “initial value function bias” problem that exists in previous RLHF fine-tuning approaches. The authors propose using a memoryless noise schedule for fine-tuning in order to turn the learned distribution into the desired reward-tilted distribution without bias. Furthermore, a novel algorithm named adjoint matching is proposed to solve the stochastic optimal control problem. Experimental results show that fine-tuning a flow matching base model with adjoint matching outperforms baselines such as DRaFT, ReFL, and DPO." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main paper presents experimental results on fine-tuning a Flow Matching model and provides pseudo-code for fine-tuning denoising diffusion models; it would be more convincing if results on denoising diffusion models were provided.\n\nThe experiments with classifier-free guidance do not seem comprehensive. It would be better if there were similar quantitative comparisons with baselines other than the selected DRaFT-1."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Did the authors compare the proposed approach to the work by Uehara et. al. [1]?\n- Did the authors test how adjoint matching works with only a few diffusion steps?\n- How did the authors select the time points to discretize? Uniform in the interval [0,1]?\n- Do the authors have an intuition as to why the lean adjoint matching objective outperforms the discrete/continuous adjoint matching objective? I read the explanation that the continuous adjoint needs clipping and is unstable, but are there other reasons?\n- Are the authors planning to make the code public?\n\n---\n\n- [1] Uehara et al. (2024). Fine-Tuning of Continuous-Time Diffusion Models as Entropy-Regularized Control. *arXiv preprint arXiv:2402.15194*\n- [2] Vargas, F., Grathwohl, W., & Doucet, A. (2023). Denoising diffusion samplers.  *ICLR 2023*. 2024.\n- [3] Zhang, Q., & Chen, Y. (2021). Path integral sampler: a stochastic control approach for sampling. *arXiv preprint arXiv:2111.15141*\n- [4] Nusken, Nikolas, et al. \"Transport meets variational inference: Controlled Monte Carlo diffusions.\" *The Twelfth International Conference on Learning Representations: ICLR 2024*. 2024.\n- [5] Richter, L., & Berner, J. (2023). Improved sampling via learned diffusions. *ICLR 2024*. 2024." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- The paper is well-written and structured.\n- Strengths of memory-less noise schedule:\n - The proposed approach is a less complex and provides an arguably more elegant solution to the initial value function bias compared to the work by Uehara et. al. [1].\n - The proposed approach is compatible with both DDPM and Flow/Bridge-Matching models.\n- Strengths of (lean) adjoint matching:\n - Simple regression-based objective with circumvents memory problems associated with the discrete adjoint method\n - Simulating the adjoint ODE does not require control evaluations, making it more scalable than the continuous adjoint method.\n - Compatible with general SOC problems.\n- Numerical evaluations show that the proposed approach outperforms competing methods on a variety of evaluation criteria." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper aims to fine-tune a pre-trained diffusion model. The contribution of this paper is two-fold:\n\nFirst, the paper explains the problem of overoptimization when fine-tuning diffusion models based on a learned reward function. 
To avoid the initial value function bias problem, the paper proposes using a memoryless noise schedule, defined as a noise schedule that ensures that the initial and final states are independent under their joint distribution.\n\nSecond, the paper introduces (lean) adjoint matching for stochastic optimal control, which resolves the memory constraints of the discrete adjoint method and makes the continuous adjoint method more scalable by making the simulation of the adjoint ODE cheaper to compute." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Weaknesses of the memoryless noise schedule:\n - The statement “however, there does not yet exist a simple approach which actually provably generates from the tilted distribution” is, to the best of my knowledge, not true: take for example the approach in [2, 3] and use the base measure as the reference process. Then, we have the terminal cost\n \n $$\n g(X_1) = p^{\text{base}}(X_1)/p_{\text{target}}(X_1) = p^{\text{base}}(X_1)/\left(p^{\text{base}}(X_1)\exp(r(X_1))\right) = 1/\exp(r(X_1)), \n $$\n \n which ensures that \n \n $$\n p_{\text{target}}(X_1) = p^{\text{base}}(X_1)\exp(r(X_1)).\n $$\n \n The ‘memoryless’ property thus alludes to the fact that we need a time-reversal of the base process. A more general discussion compared to [2, 3] can be found in [4, 5].\n \n- Weaknesses of (lean) adjoint matching:\n - It is not clear (at least to me) which objective function lean adjoint matching is optimizing. \n - Lean/continuous adjoint matching requires simulating another differential equation, which may result in computational overhead.\n- The code is not public." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The concept of using stochastic optimal control (SOC) to fine-tune diffusion-based generative models is not new, as [1] formulates fine-tuning for diffusion models (specifically forward-reverse models) through Doob's h-transform, which involves optimally controlled diffusion processes. The novelty lies in:\n\n* Reformulating fine-tuning of flow-matching based generative models as an SOC problem: the authors introduce a suitable cost functional for the control policy, parameterized by a trainable neural network. Additionally, they incorporate a memoryless noise schedule to ensure factorizability, thereby eliminating inherent bias.\n* Proposing an adjoint-matching objective to solve the above SOC problem. It is well known that solving SOC problems is computationally challenging due to the need for continuous gradient graph caching. 
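As background for this point and the sentence that follows: in its generic textbook form, for an ODE $\dot X_t = b(X_t, t)$ with terminal cost $g(X_1)$, the continuous adjoint state $a_t = \nabla_{X_t}\, g(X_1)$ solves the backward ODE

$$
\dot a_t \;=\; -\,\nabla_x b(X_t, t)^{\top} a_t, \qquad a_1 = \nabla g(X_1),
$$

so gradients can be obtained without caching the full computation graph of the forward simulation. This is standard background with my own notation, not the specific adjoint equations used in the paper.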
Extending the adjoint methods commonly used in dynamical learning into a dynamic solver for SOC objectives is, in my view, a significant contribution.\n\n\n```\n[1] Denker et al., Efficient Finetuning of Conditional Diffusion Models by Learning the Generalised h-transform.\n```" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose an SOC formulation for fine-tuning flow-based generative models, demonstrating that naive approaches can introduce inherent bias. They introduce a memoryless noise schedule to ensure convergence to the tilted distribution and present the Adjoint Matching objective as a scalable training objective for SOC problems. Their comparisons show improved generalization, consistency, and diversity in the fine-tuned generative models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* While I may have missed it in the appendix, it appears that the experiments in this paper primarily address the quality of fine-tuning, without providing quantitative results on aspects of the proposed adjoint objective, such as convergence plots or memory usage statistics comparing the adjoint matching loss to traditional SOC objectives, as shown in studies like [2, 3]. From a theoretical standpoint, although the authors suggest that the approach could be applied effectively, it would strengthen the claim to include experiments validating the numerical effectiveness of the new SOC objective.\n\n\n```\n[2] Nüsken and Richter, Solving high-dimensional Hamilton-Jacobi-Bellman PDEs using neural networks: perspectives from the theory of controlled diffusions and measures on path space.\n[3] Domingo-Enrich et al., Stochastic Optimal Control Matching.\n```" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce a reward fine-tuning framework for diffusion and flow matching models, based on stochastic optimal control (SOC), and Adjoint Matching, a new SOC algorithm." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024adjoint,\ntitle={Adjoint Matching: Fine-tuning Flow and Diffusion Generative Models with Memoryless Stochastic Optimal Control},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xQBRrtQM8u},\nnote={under review}\n}" }, "abstract": { "value": "Dynamical generative models that produce samples through an iterative process, such as Flow Matching and denoising diffusion models, have seen widespread use, but there have not been many theoretically-sound methods for improving these models with reward fine-tuning. In this work, we cast reward fine-tuning as stochastic optimal control (SOC). Critically, we prove that a very specific *memoryless* noise schedule must be enforced during fine-tuning, in order to account for the dependency between the noise variable and the generated samples. We also propose a new algorithm named *Adjoint Matching* which outperforms existing SOC algorithms, by casting SOC problems as a regression problem. We find that our approach significantly improves over existing methods for reward fine-tuning, achieving better consistency, realism, and generalization to unseen human preference reward models, while retaining sample diversity." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Reward fine-tuning", "stochastic optimal control", "flow matching", "diffusion models", "RLHF", "adjoint method" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/01bb70af1d5c59774841830907055f097dcc5f1a.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Adjoint Matching: Fine-tuning Flow and Diffusion Generative Models with Memoryless Stochastic Optimal Control" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xQCXInDq0m
CoS: Enhancing Personalization and Mitigating Bias with Context Steering
main
Active
personalization;context;large language model;inference;controllable generation
foundation or frontier models, including LLMs
6;6;8
4;4;3
3;3;4
3;3;3
2;3;4
6.666667
3.666667
3.333333
3
3
-1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "-How can CoS be extended to handle multiple contexts with varying levels of influence? How would the method resolve potential conflicts between different contexts?\n\n-Can you provide a more comprehensive analysis of the computational complexity of CoS? How does the computational cost vary with different parameters and input characteristics (input length, context size, and lambda values)? \n\n-How does the CoS approach affect the LLM's ability for other tasks, e.g., reasoning and creativity? It might be worth some discussion here.\n\n-The appendix seems missing from the manuscript. Is it accidentally omitted?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "-The paper is well written and easy to read.\n\n-The proposed approach to achieve personalization is simple, novel, and training-free, applicable to various LLMs. \n\n-Extensive experiments demonstrate strong performance in personalized recommendations, identification of implicit intents and quantification of extent of “personalization”. \n\n-The experimental analysis is comprehensive." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces Context Steering (CoS), a method for controlling the influence of context in Large Language Model generated text. The key idea behind CoS is to quantify the impact of context by comparing the output probabilities of the LLM with and without the given context. This key parameter lambda allows CoS to adjust the level of contextual influence on the generated text. \n\nThe paper demonstrates the effectiveness of CoS in various applications. One application is generating personalized recommendations, where CoS can tailor the LLM's output to specific user preferences. Another application is inferring relationships between open-ended texts, which can be used for tasks like classification and quantification of implied statements." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "-Focused primarily on a single context. The paper primarily focuses on scenarios with a single, dominant context. However, real-world situations often involve multiple, potentially conflicting contexts. For example, in the movie case, the user might be interested in comedy movies, science fiction but also movies with great storytelling. \n\n-Limited Discussion on Computational Complexity: While the authors mention that CoS requires twice the amount of compute compared to a vanilla forward pass, they do not provide a detailed analysis of its computational complexity. A more in-depth analysis of how the computational cost scales with input length, context size, and lambda values would be beneficial. 
\n\n- Limited discussion on the impact of CoS on other tasks such as reasoning and creativity." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. CoS is a simple method of personalizing LLM outputs to context, without requiring fine-tuning or prompt tuning. The method saves on the cost and effort needed for training or prompt tuning, while being effective in the tests carried out by the authors.\n2. The framework can be used directly across many personalization contexts. Fine-tuning or prompt tuning would require re-tuning for each new context.\n3. The experiments show promise, and include human evaluations, GPT-4 evaluations, and comparisons with baseline models, across various personalization contexts and implicit hate settings." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces Context Steering (CoS), a method to personalize large language model (LLM) outputs at inference time. This is done by providing the user's characteristics and preferences as context, and adjusting the influence of the provided context using a contextual influence function. The influence of this function on the token probabilities can be adjusted to control how personalized the output is to the given context. Applications of CoS include personalized recommendations involving topics such as movies, travel, cooking, etc. Besides this, the paper also introduces a Bayesian inference model by inverting the CoS probability model. This is used for classifying and quantifying implicit hate speech. Further applications of this Bayesian model include identifying tones in open-ended statements and online content moderation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Limited contexts: While CoS is effective for single, straightforward contexts (e.g., "I like {genre}"), user preferences are often more complex, involving various (possibly conflicting) likes and dislikes. It would be interesting to see the method's performance under more sophisticated and detailed contexts.\n2. The baseline experiments in Figure 4 are unclear to me. How are the various values of lambda used in the case of in-context learning and multi-turn QA? Also, could the supposedly worse performance of ICL be fixed via prompt tuning?" 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "The main concerns have been outlined under Weaknesses. Below are some additional questions:\n\nQ1. When adding the difference to the original model's predictions, how do you ensure that the generated results remain fluent, coherent, and meaningful?\n\nQ2. Could you provide an example illustrating how to compute lambda and the degree of hate using equations (4) and (5)?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "S1. Controlling the level of personalization by using the difference between LLM outputs with and without personalized context appears reasonable and straightforward, with the entire process completed at inference time.\nS2. The approach of inferring implicit intents from a given generation result is interesting.\nS3. A variety of experiments are presented." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces the CoS method for controlling the personalization of LLM generation results during inference. CoS operates by calculating the difference between LLM outputs with and without personalized context, and subsequently incorporating this difference into the original outputs with a weight parameter, lambda, to adjust the level of personalization. A higher lambda corresponds to a greater degree of personalization. The core idea shares similarities with existing counterfactual methods; however, applying it to control personalization is novel. Besides proposing CoS, the paper presents a method for inferring lambda in reverse from a given generation result, aiding in the identification of implicit intents, such as the 'degree of hate in statements.'" }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1. The experimental evaluation appears insufficiently convincing. It would be beneficial to include more evaluations with objective metrics. For instance, incorporating experiments conducted on established benchmarks for LLM personalization [1] and recommendation [2] would strengthen the analysis.\n\nW2. Some experiments and their results are difficult to follow, such as those related to movie recommendations and hate identification. In the recommendation experiments, it is unclear how the baselines—multi-turn Q&A and in-context learning—are compared under different lambda values. Moreover, the results indicate a higher win rate for these baselines. How do these outcomes demonstrate the proposed method's advantages? For the hate identification experiments, the results are not presented in a clear manner.\n\nW3. The method's effectiveness seems dependent on the LLM’s existing ability to generate personalized responses for a given context. 
This suggests that the approach amplifies current personalization rather than fundamentally enhancing it. For example, if an LLM's personalization is flawed, the method cannot correct it. This limitation indicates that the approach may not serve as a replacement for traditional tuning-based methods.\n\nW4. The advantages of this method over prompt-based approaches (e.g., the multi-turn Q&A baseline) or in-context learning are not clearly outlined.\n\nW5. Table 2 does not include results for lambda=0. Providing these results would offer a more comprehensive view of the evaluation.\n\n[1] LaMP: When Large Language Models Meet Personalization.\n[2] BARS: Towards Open Benchmarking for Recommender Systems." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose Context Steering (CoS), an inference-time technique that enables generating outputs more relevant to the user-provided contexts, leading to better personalization and various applications for large language models." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024cos,\ntitle={CoS: Enhancing Personalization and Mitigating Bias with Context Steering},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xQCXInDq0m},\nnote={under review}\n}" }, "abstract": { "value": "To deliver high-quality, personalized responses, large language models (LLMs) must effectively incorporate \\textit{context} — personal, demographic, and cultural information specific to an end-user. For example, asking the model to explain Newton's second law with the context \\textit{``I am a toddler''} should produce a response different from when the context is \\textit{``I am a physics professor''}. However, leveraging the context in practice is a nuanced and challenging task, and is often dependent on the specific situation or user base. The model must strike a balance between providing specific, personalized responses and maintaining general applicability. Current solutions, such as prompt-engineering and fine-tuning require collection of contextually appropriate responses as examples, making them time-consuming and less flexible to use across different contexts. In this work, we introduce \\textbf{Context Steering (CoS)} —a simple, training-free decoding approach that amplifies the influence of the \\textit{context} in next token predictions. CoS computes contextual influence by comparing the output probabilities from two LLM forward passes: one that includes the context and one that does not. By linearly scaling the contextual influence, CoS allows practitioners to flexibly control the degree of personalization for different use cases. We show that CoS can be applied to autoregressive LLMs, and demonstrates strong performance in personalized recommendations. Additionally, we show that CoS can function as a Bayesian Generative model to infer and quantify correlations between open-ended texts, broadening its potential applications." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "personalization", "context", "large language model", "inference", "controllable generation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/36228836d6e6182366093ac05a20abe8f2fe682e.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "CoS: Enhancing Personalization and Mitigating Bias with Context Steering" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xQIJ5fjc7q
DAG-Jailbreak: Enhancing Black-box Jailbreak Attacks and Defenses through DAG Dependency Analysis
main
Active
Jailbreak Attacks and Defenses;LLM Security;DAG Dependency Analysis
alignment, fairness, safety, privacy, and societal considerations
5;5;5;6
2;4;3;3
3;3;2;3
3;2;1;3
1;2;3;3
5.25
3
2.75
2.25
2.25
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- In the design of defense, it assumes the access to a justification whether the prompt is malicious or benign. When the prompt is judged as malicious, why does it still need to be served through a set of defense components instead of simply a refusal answer? \n- The evaluation metrics are confusing. In particular, three metrics, JR, HR and AR are evaluated, but 'JR and AR as the main\ncriterion' for attack and defense evaluation, then what is the point of proposing and evaluating HR metric? By intuition, I think HR cannot be regarded as successful jailbroken, but close to an aligned state that did not provide harmful information. Therefore at least on the evaluation on jailbreak defense, HR+AR is a more reasonable metric than AR." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- This work proposed the use DAG dependency analysis to facilitate the enhanced design of jailbreak attack/defense/evaluation, which gives a broader view of the field to study\n- A comprehensive study of black-box attacks including both mutation and adversarial generation-based algorithms is provided, and the DAG-attack designs show promising improvement toward individual attack methods on most models.\n- A comprehensive study of black-box defense is conducted based on the mixture-of-defense mechanism, that assigns different defense methods to specialized defense. The results show good defense performances and generalizability of the design.\n- A often overlooked metric, namely Jailbreak Hallucination is proposed in the evaluation, which is further refine the evaluation study of jailbreak." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces DAG-Jailbreak, a framework improving black-box jailbreak attacks and defenses for LLMs. DAG-Jailbreak leverages Directed Acyclic Graph (DAG) dependency analysis to create three key components: DAG-Attack, which builds effective attack strategies, DAG-Defense, a novel defense mechanism using a Mixture-of-Defenders; and DAG-Evaluation, which incorporates Jailbreak Hallucination and a two-stage assessment framework for evaluating LLM outputs. Experiments demonstrate that this method enhances attack effectiveness, improves defense generalizability, and refines evaluation practices." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Though the DAG analysis covers a diverse set of representative methods, it highly depends on the human efforts to conduct the analysis. The ensemble of different methods for each group (e.g., DAG-Attack-Gen) did not employ any automatic pipeline to improve each decomposed component/verify the improvement of each replacement. 
For example, DAG-Attack-Mut got even worse JR on GPT-4 and Claude models, which indicates that the design did not get the expected improvement from the analysis.\n- As randomness is introduced by setup 'temperature to 0.7, top-p to 0.7, and top-k to 50', the reproductivity is not under strong control. And given the limited number of evaluated data samples, it lacks statistical analysis such as a confidence interval to show the improvement is not marginal and random. \n- As a defense design, utility preservation is not even considered for the evaluation of DAG-Defense. High defense effectiveness may sacrifice the general utility, especially when this is an ensemble design. Also, the cost of defense is not discussed and evaluated. When multiple defense are stacked together, the cost the deploying the design will also increase significantly." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "Not needed" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The Prompt generation process is not clearly Explained." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is very detailed in experimentation. For showing the effectiveness of the attacks and defense.\n2. The approach is tested on both open-source and Closed-source LLM." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents Directed acyclic graphs (DAG) approach to interact with large language models (LLMs) and elicit a jailbreak condition This paper introduces DAG frameworks. This paper introduces DAG-Jailbreak, a novel framework leveraging Directed Acyclic Graph (DAG) dependency analysis to construct three comprehensive, automated, and logical frameworks for robust jailbreak attack, defense, and evaluation methodologies." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Algorithmic Detail: Key steps, like mutation, selection, and adversarial generation, are described conceptually but lack specific algorithms or pseudocode, making it hard to replicate exactly.\n\nDependency Management: The DAG structure’s dependencies between attack components are not clearly defined, leaving ambiguity in handling conflicts or prioritizing nodes.\n\nHyperparameters and Configuration: Parameters for mutation rates, prompt scoring, and component-specific configurations are not provided, which would require experimental tuning.\n\nEvaluation and Termination Criteria: The paper lacks clear metrics for attack success and stopping conditions, which are crucial for implementing the iterative process efficiently.\n\nLack of Algorithmic Details\nMutation and Selection Mechanisms: While the paper outlines stages like seed initialization, selection, and mutation for DAG-Attack-Mut, it does not provide the actual algorithms or pseudocode. For instance, it mentions using techniques like AutoDAN-GA and GPTFuzzer but lacks specifics on parameter settings, mutation strategies, or how to handle dependencies between mutations.\nAdversarial Generation Process: The DAG-Attack-Gen process is also described in broad strokes, such as using an adversarial LLM for red-teaming, but it doesn’t specify how the adversarial model should be trained or configured to generate optimized jailbreak prompts. Key details on prompt design, feedback loops, or specific evaluation metrics are missing." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- Can you algorithmically describe the process used to determine the dependencies shown in Figure 1 and Figure 2?\n- Can you algorithmically describe the technique composition process used to produce the attacks and defenses presented in the evaluation from the dependency graphs in Figure 1 and Figure 2?\n- Are DAGs used in DAG-Evaluation? If not, the name may be misleading.\n\nMinor Writing Recommendations:\n\nAs may be clear from the rest of the review, my complaints are largely centered on the organization and presentation of ideas. Towards improving the organization of the paper, it may be best to present DAG-Attack and DAG-Defense separately, each with their own background and evaluation. This organization choice would reduce the mental load on the reader. \n\nFurther, there are several references to global/local optimization without formalizing the optimization target or clarifying what is meant by global and local. The abstract and introduction both refer to \"global algorithms\", which are not concretely introduced until section 3.2 and 3.3, which are halfway through the paper. 
By introducing the specific global algorithms used to systematize the current literature earlier, design insights of DAG-Jailbreak could be made clearer while also allowing the paper to be better understood in a single pass. Formalizing the inputs and outputs of each stage of the attack and defense algorithm decomposition may help future LLM jailbreaking works consistently apply your framework.\n\nThe paper demonstrates state-of-the-art results in an important problem domain and is well-situated in the literature, but does not provide enough information to reproduce the work." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Important problem domain.\n- Strong and comprehensive empirical evaluation results.\n- Highlights commonalities among LLM jailbreaking works." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes DAG-Jailbreak, a framework for combining existing LLM jailbreaking attack and defense methods to achieve stronger overall performance. Both the attack and defense methods are empirically evaluated under LLM-as-Judge with LLaMa-3 as the judge. The attack method is shown to be significantly more effective than several recent baselines from the literature across a wide range of LLMs. The defense method is shown to reduce the jailbreak hallucination rate (responding to the jailbreak prompt with non-harmful content), improving the correct refusal rate, although sometimes at the cost of increasing the jailbreak success rate." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Despite the impressive evaluation results, this paper fails to sufficiently communicate its core contribution: a general method for integrating multiple LLM jailbreaking attack and defense techniques. There are two critical gaps in understanding that would prevent me from reproducing this paper's work and consequently prevent me from accepting it:\n\n1. The dependency analysis is not formalized, despite claims that it can be automated. The process of creating a dependency graph from a set of independently developed jailbreaking techniques is not trivial and is not effectively described by the paper. Without further explanation of the dependency analysis process, I would be unable to reproduce the dependency graphs presented in Figure 1 and Figure 2.\n\n2. The process of creating a combined attack or defense method from a dependency graph is unclear and represents a nontrivial engineering challenge when independently developed techniques do not adhere to any set interface. Without a detailed discussion of how the dependency graph is applied to generically compose existing techniques, I would be unable to reproduce the attacks and defenses presented, even with the dependency graphs from Figure 1 and Figure 2." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. In DAG-Evaluation, how are Keywords Matching, Binary Classifier, and LLM-as-a-Judge fairly compared, especially in handling Jailbreak Hallucination, and what are their key differences?\n\n2. Given the complexity of manual DAG dependency analysis, how can global optimality of attack combinations be ensured, and is there proof of this?\n\n3. Could you provide details on the computational overhead?\n\n4. Could you explain more about the practical application of the DAG-Jailbreak framework in real-world scenarios?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper presents a comprehensive DAG-Jailbreak framework that includes attack strategies (DAG-Attack), defense mechanisms (DAG-Defense), and evaluation methods (DAG-Evaluation).\n2. It provides thorough experimental validation across various LLMs, such as GPT-3.5, GPT-4, and LLaMa-2, showing significant improvements in attack success and defense effectiveness over baseline methods.\n3. The framework's adaptability and ability to integrate new methods make it highly scalable." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents the DAG-Jailbreak framework, which employs Directed Acyclic Graph (DAG) analysis to enhance both jailbreak attacks and defenses for LLMs. The framework consists of three key components: DAG-Attack, which optimizes attack strategies using mutation and adversarial generation methods; DAG-Defense, which introduces a Mixture-of-Defenders approach to improve the generalizability and effectiveness of defenses; and DAG-Evaluation, which assesses the success of attacks and defenses, incorporating the concept of Jailbreak Hallucination to identify irrelevant responses. Experimental results demonstrate the framework's ability to significantly improve both attack success rates and defense robustness across multiple LLMs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The DAG formulas lack detailed explanations, particularly regarding how they ensure global optimization and eliminate redundant paths.\n\n2. Experimental parameters like temperature, top-k, and top-p are minimally described. More detail would improve the transparency and reproducibility of the experiments.\n\n3. The concept of Jailbreak Hallucination needs a clearer distinction from typical LLM hallucinations. Further clarification would enhance understanding.\n\n4. Minor grammatical issues, such as \"semantical similarity\" instead of \"semantic similarity,\" slightly affect the paper's polish." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose DAG-Jailbreak, a novel framework leveraging Directed Acyclic Graph dependency analysis to construct more robust jailbreak attacks and defenses." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024dagjailbreak,\ntitle={{DAG}-Jailbreak: Enhancing Black-box Jailbreak Attacks and Defenses through {DAG} Dependency Analysis},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xQIJ5fjc7q},\nnote={under review}\n}" }, "abstract": { "value": "Black-box jailbreak attacks and defenses, a critical branch of the large language model (LLM) security, are characterized by their minimal requirement for user expertise and high potential for automation. However, current black-box jailbreak approaches often adhere to a uniform global algorithmic framework, leading to suboptimal solutions due to challenges in local optimization. This limits both their effectiveness and scalability. To address these limitations, we propose **DAG-Jailbreak**, a novel framework leveraging Directed Acyclic Graph (DAG) dependency analysis to construct more robust jailbreak attacks and defenses. The core idea behind this framework is to combine optimal sub-components to form a more effective global algorithm. **DAG-Jailbreak** compromises three components: *DAG-Attack*, which creates highly effective attackers based on two global algorithms and is capable of compromising well-aligned LLMs without prior knowledge; *DAG-Defense*, which introduces a novel global framework based on a mixture-of-defenders mechanism, significantly enhancing the scalability and effectiveness of jailbreak defenses by reducing the attack success rate to below 3\\% in most cases; and *DAG-Evaluation*, which introduces the concept of jailbreak hallucination and a two-stage evaluation framework to assess the outputs generated by LLMs comprehensively. Extensive experiments validate the superiority and robustness of **DAG-Jailbreak**." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Jailbreak Attacks and Defenses", "LLM Security", "DAG Dependency Analysis" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/a72ebd010bd6fa5e5500e3ae6b801f313854e647.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/533738a8109022bc994631d98aa28fbcf04e917b.zip" }, "title": { "value": "DAG-Jailbreak: Enhancing Black-box Jailbreak Attacks and Defenses through DAG Dependency Analysis" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xQVxo9dSID
Consistency Models Made Easy
main
Active
Consistency Models;Efficient Generative Models;Diffusion Models
generative models
3;5;5;6
4;4;4;4
3;2;3;2
2;2;2;2
2;2;2;4
4.75
4
2.5
2
2.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. Pseudo-Huber metric is adopted in the model. Is there any other alternative?\n2. In the experiments, it is better to provide more metrics other than FID only." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The approach is very efficent as they showed. It could be used to greatly mprove efficiency and\nperformance of CMs at a large scale." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the athors made the point that diffusion models can be viewed as a special case of CMs. Based on it, they fine-tuned a consistency model starting from a pretrained diffusion model and progressively approximate the full consistency condition to stronger\ndegrees over the training process. The experiments verified its efficiency." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main motivation of the paper is straightforward. It is hard for the reader to fullly trust their obverstaion that diffusion models can be viewed as a special case of CMs in practice. The data sets and metircs on the images generation are limited. More extensive experiments or analysis should be conducted to justify their claims." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "In addition to my questions already mentioned in the paper’s weaknesses, I have a few other minor questions and clarifications:\n- I do not count this as a weakness since CIFAR-10 and ImageNet 64×64 are standard benchmarks, but have the authors also considered evaluating their method on higher dimensional datasets (e.g., ImageNet 512x512, LAION) and/or latent diffusion models (e.g., Stable Diffusion)? It would certainly be interesting and further strengthen results to position ECT as the de facto standard training regime for CMs more generally.\n- Why do the authors use FD-DINOv2 for Figure 4 (left) and use FID for Figure 4 (right)? It seems inconsistent to use different metrics here.\n- In Figure 2, why is the diffusion pre-training compute for both CD and ECT less than that for SDE/DM? 
Are the authors using a different DM?\n\nOverall, should the authors adequately address my concerns and questions, I will consider increasing my score." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "**Results.**\nThe primary strength of this work lies in its empirical results, achieving state-of-the-art performance on standard image generation benchmarks. Overall, the experimental analysis is quite comprehensive by comparing against (and outperforming) recent and strong baseline methods in Table 1, and ablating some key design choices in the appendix. The ECT scaling laws in Section 4.2 are an interesting and underexplored direction in CMs, and the authors provide evidence of improved sample quality with increased training compute. If this were to also hold true for both higher dimensional datasets (e.g., ImageNet 512x512, LAION) and latent DMs (e.g., Stable Diffusion), it would have important implications for few-step generation at scale.\n\n**Insights.**\nFurthermore, some of the insights provided in the methodology of this paper are quite interesting. Specifically, Section 3.2 provides some interesting discussion on the limitations of training CMs from scratch, dubbed the “curse of CMs”, by bounding the single-step prediction error as a function of the granularity of discretization." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose easy consistency tuning (ECT), a method to train consistency models (CMs) using less compute than competing methods by initializing them from pre-trained diffusion models (DMs) and then fine-tuning them for few-step generation. The fine-tuning step enables faster convergence compared to training CMs from scratch (consistency training, CT) and avoids accumulating errors from a frozen teacher DM compared to consistency distillation (CD). Training CMs with ECT achieves state-of-the-art performance on both CIFAR-10 and ImageNet 64×64. The authors additionally explore scaling laws in the fine-tuning step, demonstrating performance gains with increased training compute." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Novelty.**\nDespite the strong results demonstrated in this work, I have questions about its novelty. The main methodological contribution is to initialize CM training (CT) with a pre-trained DM to enable faster convergence. In the setting of consistency distillation (CD) and DM distillation in general, it is already common practice to initialize the student from the weights of the teacher DM, effectively reducing distillation to a fine-tuning task to reduce computational requirements and facilitate convergence, similar to what’s being motivated here. The difference in this work compared to CD is to instead initialize a CM from a DM in the setting of CT, which alleviates DM error accumulation as in the case of CD. However, this seems like a rather simple extension of CMs and the connection between CMs and DMs does not seem particularly novel, so I would ask the authors to clarify their proposed novelty here.\n\nMoreover, the authors formulate CMs in continuous-time which was already done in the seminal CM work by Song et al. in Appendix B.2, so it’s not clear what the difference is here if any. 
Along this vein, I would also ask the authors to clarify the purpose of Section 3.1 and why it’s necessary to formulate CMs in terms of the differential condition since it currently reads as a seemingly disconnected derivation and it’s not clear from reading the paper why this formulation is needed for the rest of their method instead of using the standard discrete/continuous CM formulation by Song et al. In addition, the particular weighting function derived in Section 3.1, $w(t,r)=\\frac{1}{t-r}$, does not seem to be what the authors actually end up using in practice since, in Section 3.3, the authors go on to say “Instead, we consider decoupled weighting functions without relying on $p(r|t)$” in L311-L312. Can the authors please clarify these points and also update the manuscript accordingly.\n\n**Ablations.**\nGiven that the central contribution of ECT is initializing CMs from a DM, it would be important to see the performance of training ECT from scratch (i.e., w/o initializing from a DM) while keeping all other ECT design choices fixed, namely those outlined in Section 3.3 in terms of training schedule, loss function, and weighting function. Can the authors please provide this result in their rebuttal as it is a relevant ablation?\n\n**Easiness.**\nThe authors claim that their method makes CMs easy to train but, from a design perspective, various choices (e.g., Equation 15) do not necessarily seem more intuitive or “easier” compared to those in the original CT or in the follow-up iCT. I think it would be fair to say that ECT makes training CMs more accessible and more computationally efficient but it’s not clear that “easy” is the right characterization. Can the authors please clarify their position?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "I think all of my questions are presented in the “weaknesses” part. If you can solve most of them well, I will raise my score." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper explores the concept of the \"curse of consistency,\" which presents an intriguing perspective.\n2. The method proposed in this paper achieves good performance.\n3. This paper discusses the “scaling laws”." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper aims to improve the training efficiency of CMs by proposing three techniques: \"continuous-time schedule,\" \"dropout,\" and \"weighting function.\" The resulting method demonstrates good performance. Additionally, the paper explores a phenomenon called the \"curse of consistency\" as well as the scaling laws of ECT." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
While the \"curse of consistency\" is indeed fascinating, discussing only the upper bound fails to capture the true nature of errors. An increase in the upper bound does not necessarily indicate a corresponding increase in error.\n2. Since the primary advantage of your method lies in its training speed, I believe you may have overlooked an important scenario: \"pretraining + iCT tuning,\" as illustrated in Figure 2.\n3. The relationship between your primary observation and your main method is not clear. Specifically, I find it difficult to understand the necessity of proposing a \"continuous-time training schedule\" and a \"weighting function\" to address the \"curse of consistency\" problem. If none of your techniques are linked to the \"curse of consistency,\" what is the rationale for mentioning it, and how does this discussion relate to the overall objectives of your paper? In fact, the \"curse of consistency\" is not even addressed in your abstract.\n4. You should place the ablation studies of your techniques (Table 6) in your main paper instead of the appendix.\n5. The \"dropout\" technique appears to be quite significant; could you clarify why it is placed in the appendix?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Given the above weaknesses but also the potential impact of its contribution, I think that this paper barely misses the acceptance threshold of ICLR, hence my recommendation of \"5: marginally below the acceptance threshold\". My questions are detailed in the \"Weaknesses\" section. I am willing to update my score following the authors' response as I believe most of them can be answered with a revision of the paper during the discussion period.\n\nI would have two additional questions.\n- Consistency distillation is tested following the original setup of Song et al. (2023). How would it benefit from later improvements (even though presented only in the setting of consistency training) of iCT (Song & Dhariwal, 2023)?\n- I would be interested in hearing the authors' opinion on whether the presented model is a distillation model or not. This is not discussed explicitly in the paper." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper's strengths are immediate: by presenting a simple, easily actionable method to improve the performance of consistency models while significantly reducing their training cost, **the presented method has the potential to be used as a strong baseline for future research** in consistency models.\n\nThe paper is overall **well written**. The method is sufficiently motivated with the described insights on consistency training discretization, making diffusion pre-training an **organic improvement**. 
The **clarity** of its exposition, its available codebase -- that I could not assess in detail -- and, perhaps more importantly, its **simplicity** are important factors in its reusability.\n\nThe experimental results confirm the appeal of the new consistency training recipe with not only noticeable performance boosts, but also significant **efficiency gains**. The efficiency advantage is not only significant compared to standard consistency training, but even stronger compared to consistency distillation. Finally, even though no comparison with another method is provided, the presented scaling laws are an interesting addition." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper describes a new recipe for consistency model training. By noticing that consistency training at high discretization levels amounts to the training of a diffusion model, the authors propose to initialize the consistency model with the weights of a pre-trained diffusion model. The actual consistency model training phase is then adapted to interpolate between diffusion and continuous-time consistency training. Supplemented by additional changes to the metric and its weighting, this new recipe is shown to present favorable experimental properties: outperforming consistency distillation and training, as well as diffusion models, with a fraction of the training cost. This efficiency is highlighted in a study of the scaling laws of the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While I believe this paper provides a valuable and actionable contribution, a few weaknesses, listed below in decreasing order of importance, prevent me from providing a positive recommendation.\n\n## Clarification of the novelty of some contributions\n\nThe idea of **initializing consistency models with a pre-trained diffusion model has already been considered** in the initial paper of Song et al. (2023, Appendix B.3), albeit in a restricted setting and without the experimental value highlighted in the present paper. Still, this should be discussed and may have consequences on the empirical study (see next weakness).\n\nSection 3.1 remains **vague whether the presented derivations are contributions of the present paper** or reformulated background from consistency models. My understanding is that these derivations, including the final loss function, are all already included in the original paper of Song et al. (2023): the differential equation appears in another form in their Appendix Remark 4, and the loss function closely resembles the original loss function (by removing scheduling information). This should be clarified.\n\n**The choice of weighting function appears as a reformulation** of Song & Dhariwal (2023), unless I misunderstood its description in the paper. The insight of the pseudo-Huber metric providing an adaptive weighting term is interesting, but I do not see how the proposed alternative (using the $\\ell_2$ as metric and modulate it with the weighting that the pseudo-Huber metric would have brought) is any different. Could the authors clarify this?\n\n## Lacking experimental insights\n\nWhile appealing, the presented experimental results lack additional insights and ablations to fully support the claims of the paper and highlight its added value w.r.t. prior work. Two main components are missing.\n\nThe first missing component is a **full fledged ablation study**. 
One is already included in Appendix Section B (Table 5). However, given this preliminary result showing that the simple diffusion pre-training (already considered in prior work, cf. previous weakness) brings most of the improvements, I think the paper would benefit from including in its main part an augmented ablation study in the same experimental setting as Section 4. This ablation study should then be discussed to assess the significance of each contribution in the new training recipe.\n\nThe second missing component is a **study of the evolution of the methods' performance w.r.t. training time/iterations**. As is, baseline and ablation results are presented at a fixed number of iterations. Plotting their performance w.r.t. the number of iterations would enable a more comprehensive efficiency comparison, as well as justify the choice of stopping time for baselines in e.g. Table 1.\n\n## Lack of moderation and rigor in some assertions\n\nThis is a less important issue that still deserves to be addressed. The following statements lack moderation and/or rigor.\n- In Section 1, it is stated that \"the speedup achieved by these sampling techniques usually comes at the expense of the quality of generated samples\". This is partially incorrect as some works like the one of Karras et al. (2022) significantly reduced the number of required model evaluations while maintaining performance.\n- It is incorrect that the proposed loss function of Eq. (12) \"generalizes Song et al. (2023)’s discrete training schedule to continuous time\". The proposed loss indeed relies on discretizing the aforementioned differential equation. Instead, this loss is exactly the consistency training loss of Song et al. (2023) untied from its specific discretization grid.\n- Writing $\\Delta t \\to \\mathrm{d}t$ instead of $0$ is confusing.\n- In Section 4.1, the proposed model is said to \"only [require] 1/1000 of [Score SDE's] inference cost\". This is correct but misleading as state-of-the-art diffusion models no longer require thousands of model evaluations. This statement should be toned down.\n\n## Minor issues\n\n- The diffusion ODE of Eq. (2), from Karras et al. (2022), requires that $\\mathbf{f} = 0$ in Eq. (1).\n- $f(\\mathbf{x}_t, t)$ is said to be \"a denoising function\" after Eq. (3). This statement should be more detailed.\n- To my knowledge, in Karras et al. (2022)'s timestep schedule, $\\rho = 7$ instead of $0.7$ on line 134.\n- To my understanding, a part of the numbers in Table 1 were not obtained by the authors but are reported from other papers. This should be clarified.\n- $\\epsilon$ in Eq. (19) should be $c^2$ to be consistent with the rest of the paper." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Consistency Models Made Easy" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024consistency,\ntitle={Consistency Models Made Easy},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xQVxo9dSID},\nnote={under review}\n}" }, "abstract": { "value": "Consistency models (CMs) offer faster sampling than traditional diffusion models, but their training is resource-intensive. For example, as of 2024, training a state-of-the-art CM on CIFAR-10 takes one week on 8 GPUs. In this work, we propose an effective scheme for training CMs that largely improves the efficiency of building such models. 
Specifically, by expressing CM trajectories via a particular differential equation, we argue that diffusion models can be viewed as a special case of CMs. We can thus fine-tune a consistency model starting from a pretrained diffusion model and progressively approximate the full consistency condition to stronger degrees over the training process. Our resulting method, which we term Easy Consistency Tuning (ECT), achieves vastly reduced training times while improving upon the quality of previous methods: for example, ECT achieves a 2-step FID of 2.73 on CIFAR10 within 1 hour on a single A100 GPU, matching Consistency Distillation trained for hundreds of GPU hours. Owing to this computational efficiency, we investigate the scaling laws of CMs under ECT, showing that they obey the classic power law scaling, hinting at their ability to improve efficiency and performance at larger scales. Our code will be made publicly available, making CMs more accessible to the broader community." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Consistency Models", "Efficient Generative Models", "Diffusion Models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/f159077ead13571d1ae88fbfb2b30cd7e97a76c4.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/ba04aeeda246b34ef9e3c5e46a41c8e42c262392.zip" }, "title": { "value": "Consistency Models Made Easy" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xQit6JBDR5
Look Around and Find Out: OOD Detection with Relative Angles
main
Active
out-of-distribution;out-of-distribution detection;decision boundaries
alignment, fairness, safety, privacy, and societal considerations
5;5;5;5
4;5;3;4
2;2;3;3
2;2;3;3
3;3;2;3
5
4
2.5
2.5
2.75
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Mostly, my concerns are on performance gain and the experiments on different architectures. If authors can convincingly address my concerns, I am willing to change my rating." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is well-written and easy to follow.\n\n2. Relying on the angle between feature representations and the decision boundary seems to be novel.\n\n3. The geometric interpretation of the presented method is convincing.\n\n4. The presented method can easily be integrated into existing frameworks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes to calculate the angle between the feature representation and the decision boundary, viewing from the mean of ID representations, to compute a score for identifying OOD examples. The method is evaluated on two popular benchmarks: CIFAR100 and Imagenet for OOD detection." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. I am somewhat skeptical about the performance gain. Although the paper claim performance gains across both benchmarks, the improvement is marginal for CIFAR100 (0.8% FPR95) and only evident on average. Looking at Table 1 and Table 2, the method lags behind other methods on an individual basis. It’s important to discuss why the method does not generalize well on an individual basis.\n\n2. The method is only compared on ResNet architectures. How does it perform on other recent architectures, such as Vision Transformers? Given that the method relies heavily on feature and decision boundaries, validating it on diverse architectures is essential to confirm its architecture-agnostic and plug-in characteristics.\n\nMinor Fixes: Please review the references. Some include only the publication year without the publication venue." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Why is the CSI baseline used for CIFAR10 OOD benchmark, but not used for imageNet benchmark?\n\n2. Did you consider the application of LAFO on multi-modal foundation models, such as CLIP?\n\n3. 
The feature is evaluated through the composed function $f_1\\circ...\\circ f_{L-1} \\circ g $, can you explain the reason why using this way or show some references for this? Did you consider other ways to represent features?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The idea sounds novel. The paper introduces a novel angle-based metric for OOD detection, which measures the angle between feature representations and decision boundaries relative to the mean of in-distribution (ID) data.\n\n2. This paper conducts extensive experiments to validate the proposed approach, including the standard benchmarks, and demonstrates its flexibility by incorporating it into ensemble methods and combining it with activation shaping algorithms.\n\n3. This paper also explained the connection between LAFO and the similar approach fDBD." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a novel method for out-of-distribution (OOD) detection based on feature representations in neural networks. The proposed approach, LAFO (Look Around and Find Out), introduces an angle-based metric that measures the angle between feature representations and decision boundaries relative to the mean in-distribution (ID) feature. This approach leverages the relationship between feature vectors and decision boundaries to differentiate between ID and OOD samples effectively." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The exploration of ID statistics beyond the mean is limited. As OOD detection can benefit from a richer representation of ID statistics, does the \"ID mean\" refer to the mean across all classes? How about class-specific means or other statistical summaries? If the author includes these experiments or analyses, the paper will be strengthened. \n\n2. The experiments do not sufficiently address why and how LAFO enhances ensemble performance compared to other methods. It would be beneficial to see a more detailed analysis of how the angle-based scores behave in various ensemble settings, such as different architectures or training losses, to better understand when and why LAFO performs optimally." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The effectiveness of LAFO in scenarios with severe ID class overlapping?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The angle-based approach relative to ID mean is novel in differentiating ID and OOD samples.\n- LAFO’s lack of hyperparameters simplifies its use in practical scenarios and avoids overfitting issues associated with tuning.\n- The model achieves impressive results on CIFAR-10 and ImageNet, showcasing its scalability from smaller to larger datasets.\n- LAFO can be combined with other activation shaping methods, demonstrating flexibility in enhancing model confidence scores." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents Look Around and Find Out (LAFO), a novel approach for out-of-distribution (OOD) detection using angle-based metrics. By calculating angles between feature representations and decision boundaries in relation to the mean of in-distribution (ID) features, LAFO improves OOD detection performance by leveraging the geometric relationships within feature space. The proposed method demonstrates robust performance across multiple benchmarks (CIFAR-10, ImageNet), significantly reducing false positive rates (FPR95) compared to state-of-the-art methods. Additionally, LAFO is hyperparameter-free, scale-invariant, and compatible with ensemble models, which enhances its practical utility." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- While effective, using only the ID mean for centering may limit adaptability across highly variable datasets. Incorporating other statistics could improve robustness.\n- The experiments focus on ResNet architectures. Additional comparisons with transformer-based or CLIP-based architectures could provide more insights.\n- The paper does not fully explore scenarios where LAFO may struggle, such as in cases with minimal separability between ID and OOD distributions.\n- Although LAFO is efficient, the paper could address its performance in real-time or resource-constrained settings to provide a more comprehensive view." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "no more question" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The performance is good and the analysis is easy to understand.\nThe metric is scale-invariant, allowing ensemble for better performance.\nThe experiment is comprehensive." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work proposed a novel angle-based metric for OOD detection that is computed relative to the in-distribution structure. 
They demonstrate that the angles between feature representations and decision boundaries, viewed from the mean of in-distribution features, serve as an effective discriminative factor between ID and OOD data. Experiments on CIFAR10 and ImageNet shows SOTA performance compared to other detection methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. From Figure 1, the angle \\alpha is helpful for better distinguishing ID and OOD data. And there lacks a comparison between the sine of \\alpha, the sine of \\theta, and the division of them. I think only the sine of \\alpha in Figure 2 is not convincing to demonstrate that the angle \\alpha is not very informative for ID and OOD separation.\n\n2. For experiments, I think the author should report detection results on a vanilla trained model, which is a more common and practical setting for post-hoc detection methods. The current results are all based on supervised contrastive training models.\n\n3. For experiments, it should compare with ReAct method on ImageNet OOD benchmark (in Table 2), since my empirical experience tells that ReAct always shows remarkable performance on ImageNet dataset." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024look,\ntitle={Look Around and Find Out: {OOD} Detection with Relative Angles},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xQit6JBDR5},\nnote={under review}\n}" }, "abstract": { "value": "Deep learning systems deployed in real-world applications often encounter data that is different from their in-distribution (ID). A reliable system should ideally abstain from making decisions in this out-of-distribution (OOD) setting. Existing state-of-the-art methods primarily focus on feature distances, such as k-th nearest neighbors and distances to decision boundaries, either overlooking or ineffectively using in-distribution statistics. In this work, we propose a novel angle-based metric for OOD detection that is computed relative to the in-distribution structure. We demonstrate that the angles between feature representations and decision boundaries, viewed from the mean of in-distribution features, serve as an effective discriminative factor between ID and OOD data. Our method achieves state-of-the-art performance on CIFAR-10 and ImageNet benchmarks, reducing FPR95 by 0.88% and 7.74% respectively. Our scoring function is compatible with existing feature space regularization techniques, enhancing performance. Additionally, its scale-invariance property enables creating an ensemble of models for OOD detection via simple score summation." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "out-of-distribution", "out-of-distribution detection", "decision boundaries" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/f24a1d9292bd9697ccf0dc4ce85bc32f541c5420.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/76518446dd63ce9231ab26d82c08b2d4bab4af87.zip" }, "title": { "value": "Look Around and Find Out: OOD Detection with Relative Angles" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xRDYDI6Rc9
Reliability-Aware Preference Learning for LLM Reward Models
main
Active
preference learning;RLHF;human models;scalable oversight
alignment, fairness, safety, privacy, and societal considerations
3;3;5;5
4;4;4;4
2;2;2;2
2;2;2;3
1;2;2;3
4
4
2
2.25
2
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- The TRUE dataset is essentially used to estimate the likelihood of human making an error on a specific question. This seems challenging and depend on annotator skill level and domains. How good is the model at this task and how well does it generalize?\n- Given the estimated error probability from TRUE, a simpler baseline would be to just skip examples whose annotation has an error probability beyond certain threshold." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- This paper tackles an important problem in the current paradigm of learning from human feedback.\n- The LIE dataset is useful for evaluating biases in reward models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper tackles the problem of learning from unreliable human annotations in RLHF. Specifically, it is assumed that there is some chance that the human label is incorrect / has low confidence. The reliability is then incorporated into the reward learning objective through 1) temperature scaling of the reward and 2) interpolation with a random guessing distribution. To evaluate the proposed approach, a new dataset is also built to measure reliance on length when evaluating answer correctness." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- There are two proposed ways to incorporate reliability into the objective, however: 1) temperature scaling would not work since it doesn’t change the objective. The model would in fact increase the difference between the rewards to compensate for the temperature. It can be applied at inference time, e.g., when using the RM in RLHF, though. 2) the interpolation method is not novel and has been used in (Chistiano et al, 2017), which is also pointed out by the author.\n- The paper seems incomplete, ending abruptly at experiment results. From a presentation perspective, the description of the motivation and intuitive for the proposed methods is quite verbose but the technical details and result description are thin." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. 
Could the authors clarify whether they are doing DPO training directly or only reward modeling? If it is the latter case, it seems the method could be applied to DPO as well. I would be curious about the downstream performance then.\n2. In Table 2, the CLR score for Normal PL is only 0.02, which is low. Does this mean that the unreliability issue is indeed not severe in realistic datasets?\n3. In Eq. (6), what is the purpose of the second term \"(1-p) * 0.5\"? Is it just a constant offset that does not affect training?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Reward modeling is key to the success of model alignment, and this work investigates the reliability of preference labels, which is overlooked by previous works.\n2. The work designs a well-curated dataset (LIE) and an evaluation metric to demonstrate that unreliable preference labels indeed lead to reward models that favor length over correctness.\n3. The work proposes reasonable methods to account for unreliability in reward modeling and verifies their effectiveness." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The work aims to improve the reliability of reward modeling for better model alignment. First, they construct the LIE dataset, where incorrect responses tend to be longer. The initial experiment shows that a reward model can be successfully misguided to favor longer incorrect responses. Based on this observation, they propose two methods, Reward Adjustment and Probability Adjustment, to account for the unreliability of preference labels in reward modeling. Both methods work by deemphasizing unreliable preference pairs in the training loss. The unreliability is estimated by either human or model-generated scores. They also construct the TRUE dataset to further calibrate the reliability measures. Experimental results show that Probability Adjustment with calibrated scores yields the best reward model on both their LIE dataset and the in-the-wild dataset HelpSteer2." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The proposed method seems to rely heavily on human annotation (the TRUE dataset) to calibrate the reliability measure, without which the CLR scores are not higher than the baseline (e.g., the LLM-based methods vs. Normal PL in Table 1).\n2. How the reward models would affect model alignment is not verified. This makes the whole study less motivated.\n3. The paper ends a bit abruptly. I understand there is already extensive discussion throughout the paper, but a better-organized conclusion section would still be helpful." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* It is not entirely clear to me which of the proposed mitigation methods the authors would recommend to use, based on the experimental results?\n* The use of the TRUE dataset is somewhat unclear to me, even after reading the appendix. Can the same effect be achieved if a validation set of the LIE dataset had been used to calibrate, given that this dataset also has correctness labels?\n* It is hard to connect the plots in Figure 3a, b to the text. Can the authors detail how we can conclude that annotators prefer longer answers, based on the info in these plots?\n* About the LIE dataset: in the appendix we can read that one of the authors checked for correctness. How was this done? How many changes were made? How was reliability measured, as one cannot measure inter-annotator agreement with one annotator?\n* The abbreviations LC, LI, SC, SI are used without introduction. I assume those stand for ‘Long Correct’, ‘Long Incorrect’, ‘Short Correct’, and ‘Short Incorrect’?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* **Paper addresses an important problem:** it is important to work with reliable human preference judgements, and thus to quantify the reliability of the judgements, and to mitigate any unreliability when needed.\n* **Collected dataset can be useful for future work:** the collected LIE dataset can be used by future work to check how biased annotators or reward models are against length bias." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work aims to make preference learning more robust to noise in human preference labeling, specifically focussing on length bias. \n\nAs a first step, the authors collect a dataset called LIE (Length Incentivized Evaluations) that contains queries with 4 types of answers: (1) Short Correct, (2) Short Incorrect, (3) Long Correct, and (4) Long Incorrect. \nNext, 20 annotators are asked to provide preferences for answer pairs in the dataset, and it is confirmed that annotators prefer longer answers over correct answers (as found in earlier work as well).\nThen the authors train a standard reward model on the collected preferences, which has the same length bias.\n\nFinally, the authors try to mitigate the bias by “reliability aware preference learning” (RAPL). The idea is to either adjust the reward or the bolzmann probability based on the difficulty of the questions. The authors experiment with 2 approaches to set the values: (1) Annotator self-reported confidence, and (2) an LLM-based autograder. Calibration is done through another collected dataset (TRUE), that contains annotator answers for questions with known answers.\n\nBased on the experiments, the effectiveness of either method is somewhat unclear. Adjusting the bolzmann probability based on models fine-tuned on the TRUE dataset seems to work best, although the authors mention that “this strategy doesn’t work for just any value as the model trained using the average confidence value doesn’t perform too well” (line 511-512)." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* **Effectiveness of the proposed solution is unclear:** many of the proposed solutions seem to rather increase the length bias. For the method that decreases the length bias, the authors write: “this strategy doesn’t work for just any value as the model trained using the average confidence value doesn’t perform too well”. This makes it unclear how well the proposed mitigation strategies work.\n* **Especially the second half of the paper is somewhat unclear.** This is enforced as the paper ends somewhat abruptly, without a clear conclusion (the last section is a section called ‘experiments’). I have a few specific questions, that I summarize below in the question box.\n* The LIE dataset is useful to quantify length bias, but I would rephrase contribution 2 (“we find that reward models trained on unreliable human feedback tend to place higher weight on obvious proxies like length and less weight on factual correctness”) to “we *confirm* that ...” to do better justice to findings from prior work." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. (This isn't a weakness, just a comment) The name for the TRUE dataset is cute given the other dataset was called LIE, but it feels contrived. 'Testing Reasoning and Understanding Errors' is very generic and reasoning with LLMs is a much more focused research area than sampling examples from various datasets. \n\n2. The proposed mitigation strategies read as 'practical' ways to incorporate the reliability score by minimally modifying the objective. Can you provide some theoretical justification/intuition for the different methods? \n\n3. There's an assumption of generalizability of the reliability scores learned from the TRUE dataset to other domains. This seems reasonable because the scores from TRUE are better than the self-reported confidence and LLM-scores in Table 1 but it is underperformed by the constant score assignments." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The premise for the work, that annotator reliability can be a big factor in reward model performance, is solid and grounded in issues that have also been observed in ML research in other domains. Incorporating signal about this reliability into the model training, essentially connecting the data collection process to the model training, is intuitive. \n\n2. The analysis piece on the LIE dataset is well done and a compelling data contribution given the controlled nature of the collected responses. \n\n3. The proposed mitigation strategies are interesting and pretty novel and can be incorporated into the existing training paradigm of reward models with minimal editing of the objective. 
Empirically testing each combination of methods for assigning reliability scores and incorporating these during training is helpful to the reader." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper argues that a source of error in current reward models is that the collected preference judgments used to train them can reflect biases in annotator preferences during data collection. To verify this hypothesis, the authors collect a dataset (based on TruthfulQA), Length Incentivized Evaluation (LIE), that consists of prompts paired with 4 responses that are {short + (factually) correct, short + incorrect, long + correct, long + incorrect} and train a reward model ($R_{LIE}$) on collected human preferences on examples where the length and correctness are negatively correlated. They then evaluate the reward scores assigned by $R_{LIE}$ (and another reward model trained on a held-out dataset HelpSteer2, $R_{HS2}$) on each of the 4 kinds of responses. They observe that both $R_{LIE}$ and $R_{HS2}$ assign higher scores to the longer responses, regardless of correctness, due to the unreliability of the human annotations (both the judgments collected for this paper, as well as those in HelpSteer2).\n\nIn order to mitigate this issue, the authors propose to augment the traditional learning objective for reward models with a penalty for the annotator's reliability on that particular example. This is incorporated either with the $\\beta$ parameter or as a corrective regularization term in the contemporary loss function. \n\nIn order to obtain reliability scores for examples, the authors experiment with three different ways: (1) self-reported annotator confidence, (2) a synthetic LLM-assigned grade with CoT prompting, and (3) a learned classifier score that involves (a) collecting a dataset of 1000 new preference judgments on examples with a correct answer from various benchmarks (MMLU, BigBench, etc.), (b) fitting a transformer-based classifier on a binary label {0/1} based on whether the annotator selected the correct label, and (c) using the probability assigned by this classifier as a reliability score at inference.\n\nEmpirical results show that incorporating reliability into model training improves reward model predictions in some cases, with uneven trends between the particular ways of assigning reliability scores and incorporation methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper structure is hard to follow. The first half is clear, with the collection of the LIE dataset as an analysis tool to confirm the proposed issue. The proposed mitigation strategies section could be improved: Section 6.1 presents the methods for incorporating the reliability scores, 6.2 covers the first two ways of obtaining reliability scores, then 6.3 covers only the third way, i.e., the TRUE dataset, and the paper abruptly ends after 6.4 (the experiments on mitigation) without a proper conclusion. This is fixable, but the current structure feels incomplete. \n\n2. The aforementioned issue is exacerbated because it's hard to obtain a clear takeaway from the results themselves. For instance, there doesn't seem to be a clear trend between the methods of incorporating the reliability scores via the $\\beta$ parameter or the regularization scheme in Table 1 across the three reliability measures.
Similarly, there also doesn't seem to be a clear winner among the three schemes of assigning scores, particularly because many of these combinations are outperformed by normal preference learning and by reliability-aware preference learning with constant scores. \n\n3. The paper lacks many details needed to reproduce the findings, specifically: (a) L.469-473 - the fitting of scores is done on which set of examples? Those from TRUE? (b) L.488-492 - What was the sampling scheme for creating this dataset? How do you verify that each example has a single correct answer? (c) For Table 1, what was the training scheme for the models trained on TRUE? What was the variance across different seeds/hyperparam sweeps?" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "By explicitly accounting for when humans give unreliable feedback, we can learn reward functions that better align with human values." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024reliabilityaware,\ntitle={Reliability-Aware Preference Learning for {LLM} Reward Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xRDYDI6Rc9},\nnote={under review}\n}" }, "abstract": { "value": "Reward functions learned from human feedback serve as the training objective for RLHF, the current state-of-the-art approach for aligning large language models to our values. However, in practice, these reward models fail to robustly capture our desiderata, often attributing more value to features such as output length or agreement with the user and less value to important features like factual correctness. A major reason is that human annotators provide feedback that is an unreliable reflection of their true preferences because of knowledge gaps, limited resources, cognitive biases, or other factors. We focus on making preference learning robust to unreliable feedback by explicitly modeling the knowledge and judgment of annotators. In particular, we estimate reliability scores for each provided pairwise comparison and incorporate them into the implicit human model used in RLHF, DPO, and other alignment techniques, a technique we call Reliability Aware Preference Learning (RAPL). To test our approach, we introduce the Length Incentivized Evaluations dataset as a setting in which annotators are particularly likely to provide unreliable feedback. Then, we curate the Testing Reasoning and Understanding Errors dataset for training models to predict reliability scores. We find that traditional preference learning on the LIE dataset and other commonly used RLHF datasets leads to models that place far more weight on output length than accuracy. In contrast, RAPL results in models that better capture the true values of annotators." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "preference learning", "RLHF", "human models", "scalable oversight" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/15f4441b51d0864da9badcac93c0ed87bc7a8994.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Reliability-Aware Preference Learning for LLM Reward Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xRi8sKo4XI
On Unsupervised Prompt Learning for Classification with Black-box Language Models
main
Active
Prompt Learning;Black-box Language Models;In-context Learning
foundation or frontier models, including LLMs
1;3;3;3;5
5;3;4;4;3
2;2;2;2;3
1;1;2;2;2
1;2;2;2;3
3
3.8
2.2
1.6
2
-0.845154
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See the weaknesses above." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is well-written and easy to follow.\n2. The motivation to use pseudo-labeled data as demonstrations is clear and reasonable to understand.\n3. The experiments are relatively extensive on several benchmarks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces an approach to unsupervised prompt learning for classification tasks using black-box large language models (LLMs), named as Prompt learning with Pseudo-labeled Demonstrations (PPD). It proposes to generate pseudo-labels from unlabeled data and using these as in-context demonstrations to learn prompts effectively. The authors claim that their approach can be used to improve performance on downstream tasks. The evaluations were conducted on several public benchmarks to demonstrate the effectiveness of the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The novelty of the proposed method is very limited. The idea of using unlabeled data as pseudo labels is very intuitive and covered in many previous papers, including but not limited to [1,2,3,4]. Even though the method of selecting pseudo-labeled samples is slightly different in these papers, the contributions are relatively marginal.\n2. The proposed PPD to use VR-PGE optimization is complex for classification tasks. I would like to see if there are complexity analyses compared to other baselines.\n3. The performance showed in the experiments are not convincing for such overly complicated method. I don't see much improvement of PPD even compared with vanilla ICL baselines. So I doubt the real efficiency and effectiveness of the proposed method in real practices.\n\n[1] Abburi, Harika, et al. \"Generative ai text classification using ensemble llm approaches.\" arXiv preprint arXiv:2309.07755 (2023).\\\n[2] Zhang, Yunyi, et al. \"Teleclass: Taxonomy enrichment and llm-enhanced hierarchical text classification with minimal supervision.\" arXiv preprint arXiv:2403.00165 (2024).\\\n[3] Zhang, Yunyi, et al. \"PIEClass: Weakly-supervised text classification with prompting and noise-robust iterative ensemble training.\" arXiv preprint arXiv:2305.13723 (2023).\\\n[4] Mirza, Muhammad Jehanzeb, et al. \"Lafter: Label-free tuning of zero-shot classifier using language and unlabeled image collections.\" Advances in Neural Information Processing Systems 36 (2024)." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. My major concern is the main loss function. Can you clarify the rationale behind the main loss function minimizing the distance between the in-context predictions and the zero-shot predictions in Eq. (1)? If this loss is truly optimized, the trivial solution would just encourage the model to ignore the $z$ and $D_l$. That is, the in-context prediction with the learned prompt z, i.e., $f(x_l, z, D_l)$, would approximate the zero-shot performance without the learned prompt, i.e., $f(x_I, \\emptyset, \\emptyset)$. Then what benefits can the model have from learning $z$ and selecting $D_l$? I would suggest a better loss function (for example, preference learning loss like DPO), where the model would prefer the outcome of $f(x_l, z, D_l)$ over the outcome of $f(x_I, \\emptyset, \\emptyset)$, therefore you encourage the model to leverage $z$ and $D_l$ to learn a better output.\n\n2. Why is the entropy term necessary? More explanations should be added on Page 5, Line 258-260. The ablation of the entropy term only on SST-2 in Fig 3 (b) is not convincing. I'd suggest you perform more ablation experiments on other datasets (with an unsaturated performance of direct prompting). Moreover, different $\\alpha$ will also influence the accuracy. Can you clarify your strategy for selecting $\\alpha$ to balance the trade-off of accuracy and variance? I'd suggest you perform ablation studies on the effect of $\\alpha$?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Originality: This paper combines several existing ideas together to study a new setting of unsupervised prompt learning on unlabeled data. Although most components (i.e., discrete prompt learning, in-context prediction, pseudo-labeling) used in the proposed method are not new, integrating them is new.\n\nQuality: The overall quality is decent. Experiments and analysis have been conducted on two popular benchmarks to evaluate the effectiveness of the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces an unsupervised prompt learning method using black-box, proprietary LLMs (without access to the model parameters). The method integrates several techniques: (1) pseudo-labeling of unlabeled data; (2) learning discrete prompt tokens; and (3) in-context prediction. Specifically, it first uses a black-box LLM to obtain pseudo-labeled data, then combines pseudo-labeled in-context examples and discrete prompt tokens sampled from a vocabulary to make predictions, and optimizes (1) an entropy term and (2) a consistency loss between the in-context predictions and the zero-shot predictions to update the policy of sampling prompt tokens. 
The main contribution of this paper lies in the proposed method that leverages proprietary LLMs (i.e., GPT-4 and GPT-4o-mini) to obtain high-quality pseudo-labeled data for downstream tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Clarity: The clarity needs improvement. (1) The intuition of optimizing the consistency loss term and the entropy term needs to be clearly explained. See the detailed questions below. (2) The math notation can be further simplified to improve readability. \n\nSoundness: (1) The performance of the GPT-4 family on the two selected benchmarks is highly saturated, making it hard to see a significant improvement in the method over direct prompting. Direct prompting with GPT-4 on many tasks (like MNLI, SST-2, MRPC, WNLI, and RTE) even achieves ~90 percent accuracy. It'd be better to use another benchmark dataset that has relatively lower performance with direct prompting. (2) In Tables 2 & 3, ICL is consistently worse than Direct, which seems contradictory to prior studies and this paper's claim that in-context examples are helpful to PPD. (3) There are missing ablation studies, comparing PPD, PPD without the learned prompt tokens, and PPD without the in-context examples. This seems more important than the reported ablation studies since the key contribution of this method combines (1) discrete prompt tokens and (2) ICL." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Can you elaborate on the computational costs involved, particularly with in-context demonstrations, and how they scale with larger datasets?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper presents a unique approach to prompt learning without labeled data, which is particularly valuable in scenarios with limited labeled resources.\n- The use of pseudo-labeled data as in-context demonstrations during training is a clever adaptation of LLM capabilities, making prompt training more consistent with usage.\n- The paper conducts comprehensive evaluations on diverse datasets and includes several baseline comparisons, demonstrating the method’s performance across a wide range of tasks. GLUE is a bit outdated though.\n- Detailed ablation studies illustrate the impact of various components, such as the choice of in-context demonstrations and confidence threshold for pseudo-labeling." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces an unsupervised prompt learning method tailored for classification tasks with black-box large language models. It proposes a technique where the prompt and pseudo-labels are learned concurrently, leveraging pseudo-labeled data as in-context demonstrations.
This approach contrasts with traditional methods that rely on labeled data for prompt learning. The authors first select high-confidence pseudo-labeled data using the LLM’s predictions and then use these for further prompt and label refinement, aiming to reduce inconsistencies between the prompt-learning and prompt-using phases. The proposed method, termed Pseudo-labeled Prompt Demonstration (PPD), is evaluated on the GLUE and MMLU benchmarks, showing improved performance over several baseline methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The method would have been more appreciated at a conference in a previous year. However, as the paradigm has shifted, the proposed method is no longer technically novel or significant.\n- The method’s performance depends on the accuracy of pseudo-labeling, which may be unreliable for challenging datasets or tasks with highly ambiguous labels. There’s a potential risk of propagating incorrect labels.\n- As each training sample relies on in-context demonstrations, the approach may struggle with large datasets or scenarios requiring extensive pseudo-label generation, possibly leading to increased computational cost.\n- Comparing this method with other non-prompt-based unsupervised classification techniques could provide additional insights.\n- The optimization process, including sampling tokens and updating distributions with VR-PGE, could be difficult to reproduce or adapt to other LLM settings." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "(1) One important question is why we need \"unsupervised prompt learning.\" Could prompting with a smaller amount of higher-quality data serve as an alternative solution? Additionally, might regular fine-tuning with few-shot data be another viable option? If the authors intend to highlight the advantages of this method, it would be beneficial to include relevant experiments and comparative analysis." }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper has several notable strengths, which are listed below:\n\n1. The overall motivation behind this paper is sound. LLMs have demonstrated superior annotation capabilities compared to humans, making it a logical step to consider using LLMs for labeling a larger portion of unlabeled data.\n\n2. The paper features effective visual illustrations. The figures clearly convey the core concepts, enhancing understanding.\n\n3. The experiments conducted in this paper are based on current state-of-the-art LLMs and incorporate a multi-perspective analysis, which appears to be thorough and comprehensive."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes to perform unsupervised prompt learning for classification with black-box LLMs, where the learning parameters include both the prompt and the pseudo labels of unlabeled data. (1) the prompt is modeled as a sequence of discrete tokens, each token having its to-be-learned categorical distribution. (2) To learn pseudo labels, authors first identify several reliable pseudo-labeled data and then use these data as demonstrations for ICL to annotate more unlabeled data. By performing prompt training using these data, the model can perform downstream tasks. Experiments on various benchmark datasets show the effectiveness of the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Several weaknesses need to be addressed.\n\n(1) When referring to \"black-box LLMs\" I believe the authors mean \"in-house LLMs\" in contrast to open-source LLMs. It would be helpful if they could clarify this key distinction.\n\n(2) It is a fact that given unlabeled data, one can perform various tasks. For example, with the text \"I like the movie. I do enjoy the storyline,\" one could perform sentiment analysis by labeling it as \"positive,\" and one could also conduct natural language inference by labeling it as \"entailment.\" The paper does not mention an initial human labeling process to indicate specific tasks, as it directly starts with several reliable pseudo-labeled data generated by LLMs. I'm curious if this implies that the biases of the LLMs influenced the defined tasks.\n\n(3) It would be beneficial if the authors could further explain the prompt training process, detailing how to train both the prompts and pseudo-labels simultaneously.\n\n(4) This idea seems somewhat similar to works related to self-learning (e.g., iPET, iterative prompt tuning). The key difference appears to be that the data used in this approach is annotated by LLMs. It would help if the authors could clarify the key differences between their work and these other approaches." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N.A." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see the weakness." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is well-organized and clearly presented.\n\n2. Extensive datasets from GLUE and MMLU are used to validate the proposed PPD approach." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes an unsupervised prompt learning method for text classification with black-box LLMs, combining unlabeled data, in-context learning (ICL), and prompt tuning to enhance classification performance. 
While the work is well-presented in detail, its contribution is relatively incremental, with limited novelty." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Limited Novelty (major concern): The approach lacks novelty, as key components—selecting confident pseudo-labeled data, initializing prompts with categorical distribution, using KNN for demonstrations, and updating parameters with cross-entropy and VR-PGE—are directly adopted from existing methods with minimal modifications. The contribution appears to be a straightforward combination of established techniques, without addressing specific challenges in these components or their integration.\n\n2. Inaccurate Experimental Analysis: The claim in Lines 414–416 that PPD (k=3) consistently outperforms Direct (Table 3) is inaccurate; Direct performs better on MNLI, MRPC, CoLA, WNLI, and RTE.\n\n3. Incomplete Ablation Study: The ablation study lacks thoroughness. Beyond analyzing hyperparameters, loss functions, and LLMs, the impact of omitting individual PPD components should be assessed. Although some data is in Tables 1 and 2, further explicit analysis is needed.\n\n4. Dataset Statistics (minor concern): Dataset statistics should be included.\n\n5. Font Size in Figures (minor concern): The font size in all images, particularly in Figure 3 (a, b, and c), should be increased for readability." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024on,\ntitle={On Unsupervised Prompt Learning for Classification with Black-box Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xRi8sKo4XI},\nnote={under review}\n}" }, "abstract": { "value": "Large language models (LLMs) have achieved impressive success in text-formatted learning problems, and most popular LLMs have been deployed in a black-box fashion. Meanwhile, fine-tuning is usually necessary for a specific downstream task to obtain better performance, and this functionality is provided by the owners of the black-box LLMs. To fine-tune a black-box LLM, labeled data are always required to adjust the model parameters. However, in many real-world applications, LLMs can label textual datasets with even better quality than skilled human annotators, motivating us to explore the possibility of fine-tuning black-box LLMs with unlabeled data. In this paper, we propose unsupervised prompt learning for classification with black-box LLMs, where the learning parameters are the prompt itself and the pseudo labels of unlabeled data. Specifically, the prompt is modeled as a sequence of discrete tokens, and every token has its own to-be-learned categorical distribution. On the other hand, for learning the pseudo labels, we are the first to consider the in-context learning (ICL) capabilities of LLMs: we first identify reliable pseudo-labeled data using the LLM, and then assign pseudo labels to other unlabeled data based on the prompt, allowing the pseudo-labeled data to serve as in-context demonstrations alongside the prompt. Those in-context demonstrations matter: previously, they are involved when the prompt is used for prediction while they are not involved when the prompt is trained; thus, taking them into account during training makes the prompt-learning and prompt-using stages more consistent. 
Experiments on benchmark datasets show the effectiveness of our proposed algorithm. After unsupervised prompt learning, we can use the pseudo-labeled dataset for further fine-tuning by the owners of the black-box LLMs." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Prompt Learning", "Black-box Language Models", "In-context Learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/1cddcd32bd0f444fdf37405c558d557628a14b01.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "On Unsupervised Prompt Learning for Classification with Black-box Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
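To make the mechanism discussed in the PPD reviews above concrete: the prompt is a sequence of discrete tokens, each with its own learnable categorical distribution, and since the LLM is a black box the distributions are updated with a policy-gradient estimator (the paper uses VR-PGE; the sketch below substitutes plain REINFORCE as a simpler stand-in). The reward function here is a toy placeholder — in the paper it would come from querying the LLM with the sampled prompt plus pseudo-labeled in-context demonstrations.

```python
# Hypothetical sketch of discrete prompt learning for a black-box LLM.
# Plain REINFORCE stands in for the paper's VR-PGE estimator.
import torch

vocab_size, prompt_len = 200, 10
logits = torch.zeros(prompt_len, vocab_size, requires_grad=True)  # learnable
opt = torch.optim.Adam([logits], lr=0.1)

def reward_fn(prompt_token_ids):
    # Toy stand-in for the black-box signal; in the paper this would score
    # agreement between in-context predictions and confident pseudo-labels.
    return sum(1.0 for t in prompt_token_ids if t < 50) / len(prompt_token_ids)

for step in range(100):
    dist = torch.distributions.Categorical(logits=logits)
    tokens = dist.sample()                    # sample one discrete prompt
    reward = reward_fn(tokens.tolist())       # black-box: no gradient flows
    log_prob = dist.log_prob(tokens).sum()    # log-prob of the sampled prompt
    loss = -reward * log_prob                 # REINFORCE surrogate objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```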
xS4XOS4NQ5
General Preference Modeling with Preference Representations for Aligning Language Models
main
Active
preference modeling;preference optimization;reinforcement learning from human feedback
foundation or frontier models, including LLMs
3;5;6;6
4;3;3;3
2;2;3;2
2;2;3;3
3;3;3;2
5
3.25
2.25
2.5
2.75
-0.942809
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the weakness. My major concern is about the commonness or generalization of cyclic preference in LLM alignment (chatbot scenario), and the relatively weak alignment performance using GPM." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper studies the intransitive (cyclic) preferences problem, and analyzes the weakness of traditional BT and PairRM reward models. This problem in preference rewarding is very interesting and needs more efforts from the community.\n2. The proposed GPM method is well-motivated, and it keeps the computational complexity with the BT model while can handle the cyclic preferences in the same time. Besides, this method can be adapted into the direct preference alignment methods (though may not be directly used in RLHF pipeline with policy-gradient based method e.g., PPO since it is pair based optimization, please correct me if understand wrong).\n3. The experimental results demonstrate the effectiveness of the proposed method in handling the cyclic preferences." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the intransitive (cyclic) preferences problem in reward modeling. Traditional reward models such as BT model and PairRM cannot handle cyclic preferences scenarios. To address this issue, this paper proposes General Preference Optimization (GPM) via embedding the preference information in the latent space (called preference representation learning). The proposed reward model can be applied to common direct preference alignment methods such as DPO and SPPO. The experimental results validate the effectiveness of GPM in handling cyclic preference scenarios." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. It may be not appropriate to use Games such as Tic-Tac-Toe, Go, starcraft Elo system to establish the motivation of the cyclic preference. This paper mainly studied the language model preference, i.e., the generated response for the given question. Generally, for the given user question, the preferences of candidate responses satisfies total ordering, which means there will be nearly no cyclic preference cases. Think about in the preference annotation workflow, for the same question, it rarely has the case Response A > Response B > Response C while Response A < Response C, except changing the criteria of evaluating the response. Also, the success of Lmsys Chatbot Arenas system can also validate this point.\n2. The strength of GPM in terms of computational complexity is overclaimed. Starting at L324, the authors claim that they have advantage in computational efficiency. 
However, the most commonly used BT reward model only has O(K) complexity (K forward passes) and does not require the embedding computation, which means it has better efficiency than the proposed GPM. I hope the authors can revise this paragraph to make it clear.\n3. Actually, cyclic preferences are very rare in chatbot (LLM generation) scenarios; thus the authors have to establish a specially designed cyclic criterion \"instruction following ≻ honesty ≻ truthfulness ≻ helpfulness ≻ instruction following\" for the UltraFeedback dataset. The establishment of this preference criterion needs to be explained and justified. How well does it align with the overall score? It is also suggested to provide the accuracy in terms of the overall score in Table 1. Thus, the results may not directly validate the effectiveness of the proposed method, since the established criterion is not well justified.\n4. The alignment performance of LLMs using GPM is somewhat moderate, showing no surprises. As shown in Table 3 and Table 4, the performance of GPM at iteration 3 is only marginally better than that of methods using the BT model. Notably, for a fairer comparison, we should control for the length bias in the preferences. The length-controlled win-rate results are shown in Table 6 (Appendix), which show that GPM has no significant advantage over the BT model in aligning the LLM. I suggest the authors use the LC win rate to present their results in the main body, as the LC win rate is fairer." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Can the authors elaborate a bit more on lines 474-475, \"To avoid the preference bias when using GPT-4-turbo as the evaluator\"? Why would GPT-4o-mini be better suited as the AlpacaEval 2.0 evaluator here? \n- Are the responses generated using GPM generally longer than the baseline's? As AlpacaEval 2.0 introduced the LC win rate to mitigate length bias, it would be helpful for the authors to further elaborate on why they consider the raw win rate the better metric to report (Table 3 in the main text) in this case. As shown in appendix Table 6, the gain of the proposed methods on the LC win rate is smaller." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- This paper is well written and very clear. \n- This paper provides a novel approach by embedding responses into a latent space to capture intricate preference structures efficiently. The combination of preference representation learning and skew-symmetric operators is innovative and well-suited to addressing the limitations of traditional methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces the GPM with preference representation learning for efficiently capturing complex human preferences.
Traditional models, such as the BT model, struggle to represent intransitive preferences and suffer from computational inefficiencies, particularly for tasks with large response sets. \nThe GPM addresses these issues by embedding responses into a latent space, enabling more expressive preference modeling with linear complexity. This paper also proposes GPO and shows gains on policy optimization. \n\nSummary of contributions: \n- This paper proposes GPM, which efficiently models complex, cyclic, and intransitive preferences with linear complexity.\n- This paper demonstrates that GPM outperforms BT on various benchmarks, including RewardBench.\n- Enhanced language model alignment on tasks such as AlpacaEval 2.0 and MT-Bench when using GPO (w/ GPM)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- It's very interesting to see that GPM can capture the cyclic preferences that previous methods cannot. Further experiments on how capturing cyclic preferences can help existing LLMs produce better results, or showing some improvement on downstream applications, would demonstrate the true value of this capability. \n\n- As the authors already mentioned in the limitations section, this paper would benefit from more discussion and analysis of representation vector (v) generation (the model architecture choice). Without a solid ablation study, it's hard to judge whether this method generalizes. The difference in performance patterns between the 2B and 8B models also suggests that this method may require a specific recipe for each use case." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. How is the von Neumann winner related to GPO?\n2. I am not sure what the bold font in Table 4 represents. Could you clarify?\n3. How is the BT RM trained? From the appendix, it looks like BT RM refers to the reward model formed by the DPO-trained policy and the reference model, where rewards are calculated as the log-likelihood ratio multiplied by beta. If so, it should be made clear in the main text. \n4. Line 1307 (Appendix B.2): \"For the Bradley-Terry (BT) model, the temperature parameter β was set to 1, following standard practice (Rafailov et al., 2024)\". Can you clarify what \"standard practice\" refers to? It is not standard to set beta = 1 in Rafailov et al. In fact, beta is an important hyper-parameter that needs sweeping. \n5. Could you explain the experimental results on MT-Bench in Table 4 in more detail? Specifically, it seems like the improvement over the baseline is within the margin attributable to sampling, even with the proposed GPO+GPM." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The proposed GPM approach enjoys several desirable theoretical properties as explained in the Summary.
This is a good contribution to preference modeling.\n* The three sets of experiments are well designed. They cover the main research questions nicely formulated on Lines 384-389 that are warranted by the proposed GPM and GPO approaches.\n* Experimental results of GPO+GPM as evaluated on AlpacaEval 2.0 show a substantial gain compared to the previous SPPO+BT RM approach." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "* In terms of methods proposed:\n * This paper proposes the GPM approach to model general preferences. GPM computes the preference probability by multiplying a skew-symmetric block-diagonal matrix (referred to as “Skew-symmetric Preference Operator”) with learned embeddings (referred to as “preference representations”). This approach has several advantages: (1) it ensures P(y1>y2) = P(y2<y1) by design, regardless of input positions; (2) it has the capacity to model cyclic preferences (e.g., A>B>C>A), which the Bradley-Terry model fails at; (3) it has linear complexity O(K) in computing preferences among a pool of K candidates, compared to previous approaches like PairRM, which have quadratic complexity O(K^2). \n * This paper proposes the GPO objective to align LLMs with preference models. GPO is adapted from the SPPO loss by Wu et al. (2024b) for iterative preference optimization, except that it uses the preference score instead of the preference probability in the loss. \n* In terms of experiments conducted:\n * (1) Train and evaluate reward models on a cyclic preference dataset constructed from UltraFeedback. The aim is to show GPM can model cyclic preferences while BT cannot.\n * (2) Train GPMs using the Skywork Reward Data Collection and evaluate on RewardBench with two base models (2B & 8B). The aim is to show that GPM outperforms the BT-based RM in terms of reward modeling. \n * (3) Finetune a 2B and an 8B model with GPO and SPPO using the BT-based reward model and the GPM reward model. Evaluate on AlpacaEval 2.0 and MT-Bench. The aim is to show GPO+GPM yields better downstream LLMs.\n* In terms of experimental results:\n * (1) They show that GPM has the capacity to model cyclic preferences as intended.\n * (2) They show that GPM attains overall higher scores on RewardBench compared to the BT-based RM. The gain with the 2B model is substantial (+5.6), while the gain with the 8B model is marginal (+1.0). \n * (3) They show a substantial gain with GPO+GPM compared to SPPO+BT RM on AlpacaEval 2.0. The results for MT-Bench warrant more discussion than there is in the paper (see Question 5)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* Presentation of methods:\n * The von Neumann winner is introduced in Sec.5 where GPO is proposed, but it is not clear how this concept relates to the proposed GPO. \n * The current presentation of Sec.5 is confusing and redundant. It starts with a general review of previous work, repeats the points on computational efficiency that are already made in the introduction, and finally introduces the GPO method in Sec. 5.2. It seems more appropriate to move the text up to Sec. 5.1 into Background/Related Work to make a clear distinction between previous work and this paper's contribution. \n * Key modeling techniques (Eigenvalue Scale Gate; Eigenvector Embedding Head) are in the Appendix rather than the main text. \n* Clarity in experimental setup and reporting:\n * It seems like the RMs in Table 1 are trained and evaluated on the same dataset.
If that's the case, it should be made clear in the main text.\n * It is not clear in the main text how the BT RM is trained (see Question 4). \n * The \"1st\" and \"2nd\" columns in Table 4 lack explicit explanation in the caption. \n* Under-discussed experimental results:\n * Tables 2, 3, and 4 warrant more thorough discussion. For example, Line 481 reads: “These results suggest that integrating the General Preference representation model (GPM) into policy optimization can enhance the downstream performance of language models”, but Table 4 for MT-Bench shows GPM+GPO yielding only a marginal gain in MT-Bench score compared to SPPO+BT RM. The results do not support this general claim. \n* Overall, while the paper has good novelty and presents a good set of experimental results, a more detailed, methodical discussion of results is in order. Presentation and clarity in methods could also be improved. The paper would make a much stronger case if these issues were addressed." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I don't understand why the accuracy of GPM is 100% on cyclic preference data. Does the experiment involve any information or data leakage?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The preference representation learning approach can capture intricate preference structures, addressing the limitations of traditional methods in handling intransitive preferences.\n\n2. By embedding responses into a latent space, the method achieves linear query complexity, making it computationally more efficient than PairPM, which has quadratic query complexity.\n\n3. The proposed method ensures a more consistent preference probability for compared pairs, reducing the ad-hoc nature of PairPM implementations.\n\n4. Extensive experiments on benchmarks and downstream tasks demonstrate the superiority of GPM over existing methods, providing strong empirical evidence for its effectiveness." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a novel approach to modeling human preferences for enhancing the alignment of foundation models with human values. Traditional methods, such as the Bradley-Terry (BT) reward model and supervised pair preference models (PairPM), have limitations in expressiveness, consistency, and computational efficiency. The authors introduce preference representation learning, which embeds responses into a latent space to capture complex preference structures, achieving linear query complexity. They also propose a preference score-based General Preference Optimization (GPO) method, extending reward-based reinforcement learning from human feedback.
The experimental results demonstrate that the proposed General Preference Model (GPM) outperforms the BT reward model on the RewardBench benchmark by up to 5.6% and effectively models cyclic preferences. Additionally, evaluations on downstream tasks like AlpacaEval2.0 and MT-Bench show significant performance improvements of up to 9.3% after post-training with GPO and GPM." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The introduction of a latent space and preference representation learning adds complexity to the model, which might require more sophisticated training and tuning processes.\n\n2. The latent space embeddings and preference scores might be less interpretable compared to simpler models, making it harder to understand why certain preferences are modeled in specific ways.\n\n3. While the paper provides comparisons with the BT reward model and PairPM, a more comprehensive comparison with other state-of-the-art methods would strengthen the claims about the superiority of GPM." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024general,\ntitle={General Preference Modeling with Preference Representations for Aligning Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xS4XOS4NQ5},\nnote={under review}\n}" }, "abstract": { "value": "Modeling human preferences is crucial for aligning foundation models with human values. Traditional reward modeling methods, such as the Bradley-Terry (BT) reward model, fall short in expressiveness, particularly in addressing intransitive preferences. Although supervised pair preference models (PairPM) can express general preferences, their implementation is highly ad-hoc and cannot guarantee a consistent preference probability of compared pairs. Additionally, they impose high computational costs due to their quadratic query complexity when comparing multiple responses. In this paper, we introduce preference representation learning, an approach that embeds responses into a latent space to capture intricate preference structures efficiently, achieving linear query complexity. Additionally, we propose preference score-based General Preference Optimization (GPO), which generalizes reward-based reinforcement learning from human feedback. Experimental results show that our General Preference representation model (GPM) outperforms the BT reward model on the RewardBench benchmark with a margin of up to 5.6% and effectively models cyclic preferences where any BT reward model behaves like a random guess. Furthermore, evaluations on downstream tasks such as AlpacaEval2.0 and MT-Bench, following the language model post-training with GPO and our general preference model, reveal substantial performance improvements with margins up to 9.3%. These findings indicate that our method may enhance the alignment of foundation models with nuanced human values." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "preference modeling", "preference optimization", "reinforcement learning from human feedback" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/2a6250ddf18fb28f6603889d9b2db16f154bbc3e.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/c116f5099cc3370d95607d6ba61bb41753355814.zip" }, "title": { "value": "General Preference Modeling with Preference Representations for Aligning Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xS6uKkJ9Uz
Detecting Out-of-Context Misinformation via Multi-Agent and Multi-Grained Retrieval
main
Active
Multimodal Machine learning;Multi-modal Large Language Model
applications to computer vision, audio, language, and other modalities
3;5;5
4;4;4
2;3;3
2;3;3
3;3;3
4.333333
4
2.666667
2.666667
3
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Are there examples where accurate judgments are made based on the caption’s time, character, or location? The examples in the paper seem solvable by individuals without background knowledge or the multi-agent support mentioned." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This solution is intuitive and supported by a self-built multi-granularity evidence storage module. This module integrates information at various levels, such as entities and events, providing a basis for detecting and interpreting anomalies. When the evidence storage module is sufficiently large and up-to-date, it will significantly improve the accuracy of OOC detection." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The approach in this paper is feasible for enhancing OOC detection accuracy. However, the work required to build the evidence module may lack originality, potentially limiting its impact on future work in this area." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The core of this work should focus on the accuracy of image-text matching in the evidence stored within the module and the quantity of evidence it contains. Moreover, to ensure high accuracy in OOC detection, the evidence storage module may need constant updates; otherwise, maintaining a high detection accuracy for OOC over time may be challenging. Consequently, this might limit the work’s impact on future OOC research.\n2. In the example of image-text detection shown in Figure 1, the image depicts Cameron wearing fall-winter attire, with the caption stating that Cameron left the High Court on June 14. This discrepancy is quite evident, appearing solvable without extensive background data. Figure 2 revisits this example but it still relies on empirical information for news detection." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No ethics review needed." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "While I believe this paper holds valuable potential for the research community, certain limitations need to be addressed. 
My suggestions for the authors: \n\nQ1) Evaluate MACAW against real-world out-of-context data, such as the VERITE benchmark (https://github.com/stevejpapad/image-text-verification), to address W1. \n\nQ2) An evaluation on the “Miscaptioned Images” subset of VERITE would also be very valuable to demonstrate that the proposed method is effective even when the text contains inaccuracies, such as altered locations, dates, or individuals. This evaluation could partially address concerns raised in W2. \n\nQ3) Expand the ablation study (Table 2) to include an evaluation of GPT-4o-mini, demonstrating that while the high performance appears primarily due to the capabilities of GPT-4o-latest, the proposed MACAW method consistently enhances performance across models. This could partially address W3. \n\nQ4) The paper would benefit from the inclusion of inference examples, showcasing both correct and mistaken predictions along with their explanations. This could partially address W4. \n\nQ5) Based on the supplementary material, it appears that the “proprietary multi-granularity database” primarily utilizes portions of the NewsCLIPpings dataset as its “event instances” and extracts visual and textual entities for the entity databases. Is this correct? If so, why isn’t this made explicit in the paper? Additional information about the database and its design is necessary, including a clarification on whether the authors have ensured there is no data leakage between the training, validation, and test sets. \n\nQ6) Table 1 could also indicate which of these methods leverage external information from the web and/or a knowledge database and/or visual/textual entities and/or LVLMs, etc., so as to provide a fairer and more informative comparison. Alternatively, you can mention this information in section 4.1.3." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is overall well-written and easy to follow. \nIt presents a novel pipeline for detecting out-of-context misinformation using multiple agents, achieving high performance on the NewsCLIPpings dataset. \nThe ablation study demonstrates the necessity of each agent and component. \nThe method focuses on interpretability, which is crucial, especially when developing tools intended for use by the general public or journalists." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses the challenge of misinformation caused by out-of-context image-text pairs. The authors propose and implement MACAW (Multi-Agent Cross-Modal Misinformation Analysis Workflow), which leverages large vision-language models (such as GPT-4o-latest, GPT-4o-mini, LLaVa-7B and LLaVa-13B), multi-agent design, and multi-grained retrieval. Specifically, MACAW comprises: \n1) A Retrieval Agent tasked with gathering relevant “evidence” (visual entities, textual entities, and events) from a proprietary, pre-constructed database. This agent conducts an initial analysis to identify inconsistencies between the retrieved evidence and its relationship to the image-text pair under review.
\n2) A Detective Agent performs a deeper analysis of contextual elements (time, location, individuals, events, and objects) and works to identify any inconsistencies within these aspects.\n3) An Analyst Agent responsible for analyzing the outputs from the previous agents in order to deliver a final verdict and explanation. \n\nThe authors conduct experiments on the NewsCLIPpings dataset, providing a comparative analysis against state-of-the-art (SotA) methods, where the proposed MACAW method with GPT-4o-Latest achieves superior performance. Additionally, an ablation study highlights the contribution of each component, accompanied by an evaluation of the model’s generated explanations and logic." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1) The authors utilize the NewsCLIPpings dataset, which includes ‘out-of-context’ samples that are ‘synthetic’ or algorithmically created by mismatching the original image-text pairs. However, strong performance on ‘synthetic’ data does not guarantee similar results on real-world data. The absence of evaluation against real-world out-of-context misinformation is a notable limitation.\n\nW2) There is no comparison with the current SotA method on NewsCLIPpings such as the “Attentive Intermediate Transformer Representations” (AITR) model [1] which achieves 93.3% on NewsCLIPpings. Moreover, the authors of [1] express concerns regarding the use of the NewsCLIPpings dataset, noting how models can achieve high performance on it by relying on superficial patterns rather than factuality.\n\nW3) While the ablation study demonstrates that each component of the proposed method is important, Table 3 reveals the significance of using the more powerful GPT-4o-latest (92.7%) over GPT-4o-mini, which achieves only 84.6%. This performance is surpassed by previous methods, such as CCN (84.7%) and SNIFFER (88.4%). Consequently, this somewhat diminishes the significance of the proposed method, suggesting that the observed performance improvement is primarily attributable to the more powerful LVLM rather than the method itself.\n\nW4) Table 4 presents the ranking of each model’s explanations and logical consistency. However, this does not necessarily indicate the quality of the explanations, just that they are preferred over the explanations of other methods. \n\nW5) The presentation of competing methods is somewhat superficial, lacking an in-depth discussion of how the proposed method distinguishes itself from other LVLM-based approaches, such as SNIFFER or [2, 3].\n\nReferences\n[1] Papadopoulos, S. I., Koutlis, C., Papadopoulos, S., & Petrantonakis, P. C. (2024). Similarity over Factuality: Are we making progress on multimodal out-of-context misinformation detection?. arXiv preprint arXiv:2407.13488.\n[2] Tahmasebi, S., Müller-Budack, E., & Ewerth, R. (2024, October) Multimodal Misinformation Detection using Large Vision-Language Models. In Proceedings of the 33rd ACM International Conference on Information and Knowledge Management (pp. 2189-2199).\n[3] Geng, J., Kementchedjhieva, Y., Nakov, P., & Gurevych, I. (2024). Multimodal Large Language Models to Support Real-World Fact-Checking. arXiv preprint arXiv:2403.03627." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The evidence source description is vague. How did the authors address the issue of time leakage? Without clear details on data retrieval, how can we be sure the model isn’t accessing information that wasn’t available at the time of the misinformation event?\n\n2. The comparison models are mainly based on CLIP, which has far fewer parameters and less training data than GPT-4o. Are the performance gains from MACAW due to the method itself, or simply the use of a much larger model like GPT-4o? Moreover, given GPT-4o’s massive training data, is there a risk that NewsCLIPpings data was included, leading to data leakage?\n\nI am open to revising my score once the authors address my concerns." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper is well-structured and easy to read.\n2. The \"evidence storage\" in MACAW presents a relatively novel approach. The authors propose multimodel entities that comprise textual entities and image entities with efforts to ensure alignment between the two modalities." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces MACAW, a framework designed to identify out-of-context (OOC) misinformation through three key stages: \"EVIDENCE STORAGE,\" \"EVIDENCE RETRIEVAL, AGGREGATION, AND VERIFICATION,\" and \"MULTI-AGENT DETECTION.\" \n\n1. Initially, MACAW establishes an \"evidence storage\" containing multi-model evidence at the entity level and textual evidence at the event (caption) level. \n\n2. Then, with information that needs to be detected, MACAW retrieves relevant evidence based on similarities. \n\n3. Finally, MACAW employs a multi-agent (multi-step) design approach, prompting GPT-4o to focus on specific steps (Relevance, Temporal, Spatial, Object, and Event) to provide a final judgment gradually. \n\nExperimental results demonstrate that MACAW outperforms other approaches on the NewsCLIPpings dataset." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Vague Motivation: The motivation presented in this paper lacks clarity. The authors claim: \"We developed the first multi-agent OOC detection system that integrates multi-granularity information, mirroring the real-world collaboration among human experts.\" However, as far as I know, human experts responsible for misinformation detection typically make independent judgments and reach a final consensus through discussion, rather than each expert being responsible for only a single part of the detection process. In contrast, MACAW assigns each agent responsibility for only one specific step of the detection process. 
This distinction seems misaligned with how human experts collaborate. \n https://www.politifact.com/article/2018/feb/12/principles-truth-o-meter-politifacts-methodology-i/\n\n2. Lack of Literature Review: The paper overlooks some relevant works that focus on enhancing explainability, which should be acknowledged. This includes, but is not limited to:\n - Interpretable Multimodal Misinformation Detection with Logic Reasoning (Findings of ACL 2023)\n - Interpretable Detection of Out-of-Context Misinformation with Neural-Symbolic-Enhanced Large Multimodal Model (Findings of ACL 2023)\n - Interpretable Multimodal Out-of-context Detection with Soft Logic Regularization (ICASSP 2024)\n\n3. Insufficient Details on Data Storage: The paper lacks sufficient detail on how the data storage is constructed. What are the data sources? Does it retrieve information from the internet? If so, have the authors accounted for potential time leakage during retrieval? For instance, some misinformation may have already been flagged as fake news by the time it's retrieved from the web. This could introduce unintended biases, particularly when handling older misinformation cases.\n\n4. Incomplete Experimental Reports: The experimental results are incomplete. Table 1 only shows results on the Merged/Balance subset of the NewsCLIPpings dataset. What about other subsets, such as Text-Image, Text-Text, Person-Matching, and Scene-Matching? A broader range of experiments would give a clearer picture of the strengths and weaknesses of the MACAW framework.\n\n5. Inadequate Explainability Analysis: Section 4.4.2, Explainability Analysis, is insufficient. The paper compares different backbone models within the MACAW framework, but it should instead provide a comparison with other OOC detection approaches in terms of explainability. The current comparison only shows how GPT-4o performs better than other models, which is not enough to assess the explainability of the system. Additionally, incorporating more qualitative analysis in this section would further strengthen the evaluation." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "This paper proposes a novel multi-agent approach with multi-grained retrieval for out-of-context misinformation detection." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024detecting,\ntitle={Detecting Out-of-Context Misinformation via Multi-Agent and Multi-Grained Retrieval},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xS6uKkJ9Uz},\nnote={under review}\n}" }, "abstract": { "value": "Misinformation remains a critical issue in today's information landscape, significantly impacting public perception and behavior. Among its various forms, out-of-context (OOC) misinformation is particularly pervasive, misrepresenting information by repurposing authentic images with false text. Traditional OOC detection methods often rely on coarse-grained similarity measures between image-text pairs, which fall short of providing interpretability and nuanced understanding. Conversely, although multimodal large language models (MLLMs) exhibit vast knowledge and an inherent ability for visual reasoning and explanation generation, they still lack the capacity to thoroughly understand and discern nuanced cross-modal distinctions.
To address these challenges, we propose MACAW, a retrieval-based approach that indexes external knowledge, focusing on multiple granularities by extracting and cataloging relevant events and entities. Our framework first extracts multi-granularity information to assess the contextual integrity of news items, followed by a multi-agent reasoning process for accurate detection. Extensive experiments demonstrate the robustness and effectiveness of our proposed framework in identifying out-of-context fake news, outperforming the state-of-the-art solutions by {\\bf 4.3\\%}." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Multimodal Machine learning", "Multi-modal Large Language Model" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/2e3bcdbedcd7cd9fc022a4e0ef91aff0840c59d1.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/4bb544d7d62a92c6a3f7496531af1d6c9e08e130.zip" }, "title": { "value": "Detecting Out-of-Context Misinformation via Multi-Agent and Multi-Grained Retrieval" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
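Several reviews above probe MACAW's similarity-based evidence retrieval, so a minimal sketch of that step may help ground the discussion. Everything here is assumed for illustration: the function name, the flat in-memory store, and the encoder producing the vectors (e.g., a CLIP-style model); the paper's actual multi-granularity database is exactly what the reviewers ask to see specified in more detail.

```python
import numpy as np

def retrieve_evidence(query_vec, evidence_vecs, evidence_items, k=3):
    """Rank stored evidence (entities or events) by cosine similarity to
    the embedding of the image-text pair under verification, returning
    the top-k items with their scores. A production system would use an
    approximate-nearest-neighbor index instead of a dense scan."""
    q = query_vec / np.linalg.norm(query_vec)
    E = evidence_vecs / np.linalg.norm(evidence_vecs, axis=1, keepdims=True)
    scores = E @ q  # one cosine similarity per stored evidence row
    top = np.argsort(-scores)[:k]
    return [(evidence_items[i], float(scores[i])) for i in top]
```

Note how the reviewers' time-leakage concern maps directly onto this step: unless each stored row carries a timestamp and retrieval filters on it, evidence published after the misinformation event (including fact-checks of it) can leak into the verdict.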
xSOl0s1u77
TC-Bench: Benchmarking Temporal Compositionality in Conditional Video Generation
main
Active
Video Generation Benchmark; Text-to-Video Generation; Compositional Video Generation
datasets and benchmarks
3;5;5;6
4;4;4;4
3;3;3;2
2;3;2;3
3;3;3;2
4.75
4
2.75
2.5
2.75
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. How many prompts and ground truth videos are there in total, specifically for T2V and I2V? Please provide a chart to present this more clearly.\n2. How is the topic distribution of the prompts considered? It is recommended to provide a specific diagram categorizing the topic types." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The temporal compositionality proposed in the article is a significant evaluation aspect of video generation that was not addressed in previous benchmark papers. The proposed benchmark can serve as a supplement to current video generation evaluations, promoting advancements in the field of video generation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper focuses on assessing the temporal compositionality of video generation models, featuring carefully designed text prompts and ground truth videos, along with two robust evaluation metrics: TCR and TC-Scores. The benchmark is important as it better reflects the temporal dynamic performance of videos compared to previous benchmarks for video generation. The article presents three scenarios for temporal compositionality and conducts extensive baseline methods, ranging from direct T2V models to I2V models. This benchmark provides new perspectives for evaluating and improving video generation tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Regarding the evaluation of temporal compositionality, the article proposes three scenarios. Whether these scenarios are comprehensive and cover all possibilities requires further discussion and analysis.\n2. The article lacks more detailed descriptions regarding prompt design, such as topics and length distribution." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "n/a" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. 
The focus on temporal compositionality in video generation is both novel and important, given the rapid advancements in conditional video generation. The benchmarks and quantitative experiments presented are valuable additions to the field.\n\n2. The methodology is robust, featuring comprehensive experiments with detailed explanations, facilitating replication and further study.\n\n3. The paper is well-written and structured for clarity." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces TC-Bench, a new benchmark designed to assess the Temporal Compositionality of video generation models. TC-Bench is divided into two components: TC-Bench-T2V, which includes 150 prompts for evaluating Text-to-Video (T2V) models across a spectrum of attributes, actions, and objects, defining the initial and final states of scenes; and TC-Bench-I2V, comprising 120 prompt-video pairs that serve as ground truth videos and reference data for Image-to-Video (I2V) models. The metrics introduced in this study demonstrate a significant correlation with human judgments, enhancing the evaluation of temporal compositionality." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The sizes of the test sets for both T2V (150) and I2V (120) benchmarks are relatively small, which may limit their ability to comprehensively analyze the capability for temporal compositionality.\n\n2. There is a lack of analysis on the distribution of the test sets, raising concerns about their representativeness of real-world scenarios relevant to the tasks.\n\n3. The evaluation is limited to only two methods, SEINE and DynamiCrafter, within the I2V category. This might not provide a full perspective on the field, given the variety of available I2V methods.\n\n4. The influence of the structure and length of the input prompts on video generation quality is a critical aspect that remains unexamined, which could impact the effectiveness of the benchmarks." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Does the benchmark account for scenarios where multiple factors change simultaneously? For instance, transitioning from a yellow person sitting in a green car by the seaside to a yellow dog sitting in a yellow car in the desert. Can the evaluation method address such cases?\n\nIn the context of attribute transitions, could variations in materials and textures lead to changes in object ID evaluation?\n\nMight changes in video framing also affect the accuracy of the VLM (Vision-Language Model) or evaluation metrics?\n\nThe supplementary materials lack a README file, making it unclear how to interpret the evaluation results of the \"eval_results\" folder."
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper presents a highly specialized benchmark that addresses an editing problem currently not well-solved by existing text-to-video (T2V) and image-to-video (I2V) approaches. Its contribution is valuable, as it not only provides data but also includes evaluation metrics.\n\nThe writing of the paper is clear and well-structured, facilitating understanding of the proposed methods and results.\n\nThe evaluation metrics demonstrate alignment with human choices, reinforcing the relevance and applicability of the benchmark." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work introduces a benchmark for Temporal compositionality, addressing a previously overlooked and more challenging aspect of temporal compositionality in video generation. The proposed benchmark focuses on three key factors: attribute transitions, changes in object relationships, and background shifts. To evaluate these factors, the paper presents an evaluation metric and a generative method that demonstrates promising results." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The Kling method could potentially be improved by incorporating first and last frames. I am curious whether this powerful commercial T2V/I2V model, when utilizing first and last frames alongside prompts, could resolve this issue, as it relates directly to the solution of the problem.\n\nThe length of the videos is relatively short, with most being under 5 seconds, which feels insufficient. Additionally, the number of videos is limited, with only 50 available for each scenario.\n\nFigure 4 does not compare all methods. While it appears that Kling demonstrates strong instruction-following capabilities regarding object relationship changes, this is not reflected in the numerical values in Table 1. Are there any examples of bad cases available? The Gen3 model seems to perform well with background shifts. It appears that attribute transitions are the most challenging aspect. Thus, would incorporating first and last frames help mitigate this issue?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see my questions in the weakness part." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is well-written and easy to follow.\n2. The paper mitigates the shortcomings of existing video generation benchmarks, e.g., evaluating the generation quality of temporal dimension. \n3. The paper conducts extensive evaluations of state-of-the-art video generation methods.\n4. 
The authors implement a simple baseline to improve the quality." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a benchmark for evaluating video generation in temporal dimensions. The evaluation is composed of three dimensions: attribute transition, object relation change, and background shifts. VLMs (including GPT-4) are employed to get the quantitative results. The paper also contributes a simple baseline to improve the performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The evaluation setting is still limited to short video generation. In long video generation, there are more than two attribute and relation changes. In the current setting, only binary state changes are considered, which hampers the extension to long video generation.\n2. The evaluation relies heavily on the capability of VLMs. For example, the order of the results in Table 1 and Table 6 is inconsistent. If the method relies on GPT4, the cost of GPT4 hampers the wide usage of the proposed benchmark." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a new benchmark suite to evaluate temporal compositionality for conditional video generation" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024tcbench,\ntitle={{TC}-Bench: Benchmarking Temporal Compositionality in Conditional Video Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xSOl0s1u77},\nnote={under review}\n}" }, "abstract": { "value": "Video generation has many unique challenges beyond those of image generation. The temporal dimension introduces extensive possible variations across frames, over which consistency and continuity may be violated. In this study, we move beyond evaluating simple actions and argue that generated videos should incorporate the emergence of new concepts and their relation transitions like in real-world videos as time progresses. To assess the \\textbf{T}emporal \\textbf{C}ompositionality of video generation models, we propose TC-Bench, a benchmark of meticulously crafted text prompts, corresponding ground truth videos, and robust evaluation metrics. The prompts articulate the initial and final states of scenes, effectively reducing ambiguities for frame development and simplifying the assessment of transition completion. In addition, by collecting aligned real-world videos corresponding to the prompts, we expand TC-Bench's applicability from text-conditional models to image-conditional ones that can perform generative frame interpolation. We also develop new metrics to measure the completeness of component transitions in generated videos, which demonstrate significantly higher correlations with human judgments than existing metrics. Our comprehensive experimental results reveal that most video generators achieve less than ~20% of the compositional changes, highlighting enormous space for future improvement. Our analysis indicates that current video generation models struggle to interpret descriptions of compositional changes and dynamically map varied semantics across different time steps." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Video Generation Benchmark; Text-to-Video Generation; Compositional Video Generation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/7f8f685e2a00166cd013ca39bb988d26a6d6b16f.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/e03d3a33aca0cdba2996ed4a1ab1550bc6d03314.zip" }, "title": { "value": "TC-Bench: Benchmarking Temporal Compositionality in Conditional Video Generation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xSSo8kCA9G
FRUGAL: Memory-Efficient Optimization by Reducing State Overhead for Scalable Training
main
Active
Memory-efficient training;Optimization;Full-Rank Update;Large Language Models;LLM;Pre-training;Fine-tuning
optimization
5;5;5;5;6
3;4;3;4;2
3;3;3;2;3
2;2;3;3;4
2;2;1;2;3
5.2
3.2
2.8
2.8
2
-0.801784
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "GaLore theoretically prove that gradient is low-rank and a study in BlockLLM (https://arxiv.org/pdf/2406.17296) show that only a few parameters are updated during the training. A few other recent works also seem to suggest that the low rank structure exists in the network. But this paper seems to suggest the opposite. Do you see a space where these two ideas coexist? For example, low rank for certain tasks vs full rank for other tasks? \n\nMinor:\n- Introduce abbreviations for better readability. For example SGD as Stochastic Gradient Descent. \n- Missing references Adam-mini and BlockLLM" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The combination of state-free optimizers with advanced ones, like SGD and Adam, for memory efficient training is a novel idea.\n2. The empirical results show that FRUGAL does better than other methods in terms of memory use and perplexity,\n3. The paper includes sufficient ablation studies and it helps to see how FRUGAL works in different situations and settings." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The work proposes a memory efficient training method called FRUGAL which is essentially a combination of full-rank updates with gradient splitting. The authors partition the parameters and update using advanced optimizers (like Adam) for low-dimensional updates and state-free methods (like SGD or signSGD) for remaining directions. Additionally, the authors provide theoretical convergence guarantees and validate FRUGAL’s effectiveness through experiments on models like LLaMA." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Line 249 introduces state-free and stateful parameters but could provide more explicit explanation on the selection criteria. Are parameters randomly selected to each category? In that case the assumption is all the parameters are equally important for that iteration. The work could benefit from more detailed study on how to choose the parameters for state free updates. \n\nThe purpose of the density parameter ($\\rho$) is not thoroughly explained, especially in relation to zero-density training. Please clarify whether zero-density training implies all parameters are state-free (i.e., trained exclusively with SGD). The selection of $\\rho$ is not mentioned in the algorithm as well." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- 1) Including more experiments comparing the method with various stateful and stateless optimizers would enhance the paper. \n\n- 2) Testing models with larger sizes (e.g., 3B and 7B) could further demonstrate the generalizability of the proposed method. \n\n- 3) Please clarify the reasons for selecting the specific optimizers in the theoretical section. They appear restrictive and differ from those used in the main algorithm. Additional details and guarantees would help generalize this proof. \n\n- 4) While it’s mentioned that stateless optimizers typically underperform with transformer architectures, the paper doesn’t explain why FRUGAL with $\\rho=0$ achieves optimal performance in certain scenarios. Providing more details and comparisons would clarify this.\nExpanding the dataset and incorporating diverse architectures could strengthen the argument for FRUGAL's superior characteristics." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "-1) The paper presents a novel approach to improving memory efficiency while performing updates using full-rank information. \n-2) The proposed method is flexible, supporting various choices for both stateful and stateless optimizers as well as different projection methods. It offers convergence guarantees for FRUGAL within a specified framework and consistently outperforms existing memory-efficient algorithms, such as GaLore and BAdam, achieving performance levels close to the memory-intensive Adam optimizer. \n-3) Additionally, the paper provides valuable insights into the learning dynamics of transformer models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces FRUGAL (Full-Rank Updates with GrAdient spLitting) that reduces memory consumption by splitting gradient updates into two subspaces. A *state-full* subspace is updated using advanced optimization algorithms like Adam, while a *state-free* subspace is updated using stateless and memory-efficient methods like SGD or signSGD. The framework allows for a flexible choice of optimizers and projection methods. FRUGAL achieves state-of-the-art results in pre-training and fine-tuning tasks, outperforming existing memory-efficient algorithms while maintaining a similar memory budget." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "-1) The paper's structure would greatly benefit from a clearer organization. Currently, some analysis and experimental results appear within the Methods section, which disrupts the logical flow and makes it challenging for readers to follow the methodology. 
Reorganizing the paper and dedicating specific sections to distinct aspects of the research could significantly enhance readability and impact.\n\n-2) Several notations (e.g., g~) are introduced without proper definitions, which assumes too much prior knowledge from readers. Additionally, concepts like smoothness and unbiasedness are only vaguely referenced and would benefit from clearer definitions. The theory section should be expanded to explicitly define each notation and assumption, as well as to contextualize them within a more general setting relevant to the proposed method.\n\n-3) Including a full-parameter fine-tuning baseline in Table 4 would provide a valuable benchmark, offering a clearer context for evaluating the results.\n\n-4) Definitions for Full-Rank SVD/Random and Low-Rank SVD/Random are scattered across Table 1 and lack clear differentiation. Consolidating these explanations into a concise paragraph would improve clarity and reader comprehension.\n\n-5) Lastly, there are deviations from the primary algorithm, such as using column-wise projection instead of block-wise projection. For completeness, it would be beneficial to include results using the originally proposed approach alongside the variations in the experiments.\n\n-6) By resolving these issues in the revision, especially by adopting a more structured writing style and smoothing the abrupt transitions, the paper would improve considerably." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "NA" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "**Formal Definitions of Full and Free States:** Could the authors provide formal definitions of \"full\" and \"free\" states as used in the method? A clearer understanding of these terms would improve the paper’s theoretical foundation.\n\n**Main Limitations:** What are the primary limitations of this approach? A discussion on the constraints or situations where the method might be less effective would help clarify its scope and potential trade-offs.\n\n**Running Time Comparisons:** Beyond memory efficiency, how does the method’s running time compare to that of other baseline approaches? Performance in terms of speed is crucial for practical deployment, so direct comparisons would provide a more complete picture of the method’s efficiency." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "**Well-structured Presentation:** The paper is well-structured and easy to follow, with a clear presentation of concepts and methodology.\n\n**Practical Impact:** The method is straightforward to implement and has broad applicability, making it valuable for practical use in various settings." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel memory-efficient optimization method.
Unlike other state-of-the-art approaches, such as LoRA and GaLore, that have low-rank updates, this method maintains a full-rank update structure. The experimental results demonstrate its superior performance, highlighting its potential advantages in both efficiency and effectiveness over competing methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Lack of Discussion on Limitations:** The paper would benefit from a discussion of the method's limitations and potential failure modes. Addressing these aspects would provide a more balanced view of the approach's applicability and constraints.\n\n**Vague Terminology:** Given the importance of \"state-full\" and \"state-free\" in the proposed method, the paper should offer clearer definitions of these terms. Precise terminology is essential to fully understand the mechanics and implications of the approach." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to the weaknesses" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Plenty of experiments are conducted to evaluate FRUGAL, where FRUGAL demonstrates significant improvements over GaLore.\n\n- Both empirical and theoretical justifications are provided to validate the effectiveness of FRUGAL." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work proposes a new memory-efficient training method that allows part of the parameters to be optimized with optimizer states within a compact space, while the other parameters are optimized in the original space without optimizer states. Results on several pre-training and fine-tuning tasks demonstrate the effectiveness of the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The GLUE benchmark is a little outdated; more recent tasks like common-sense reasoning and MT-Bench would further improve this work.\n\n- Is there any explanation of which part of the parameters can be directly optimized with an SGD-type optimizer while the others require Adam, and why?\n\n- For $\rho=0$ in Table 2, is it equal to optimizing fully with SGD? Does it contradict recent works demonstrating that transformers cannot be effectively optimized with SGD? [1]\n\n- The concepts of state-full and state-free subspaces in lines 80/82 are hard to understand; it would be better to formally define these two concepts.
\n\n- line 192: \"Surprisingly, we found that although SVD decomposition delivers an initial boost, subsequent training with random projection yields significant improvements\": this sequencing makes it a little confusing whether \"Low-rank Random\" in Table 1 means training entirely with random projection, or first with SVD and later with random projection.\n\n- it would be better to define the meaning of K in the inputs of Algorithm 1, as well as s.\n\n\n[1] Why Transformers Need Adam: A Hessian Perspective" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "I don't have other questions. All major weaknesses are listed above." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. FRUGAL's convergence rate is provided and it can recover the rate of standard SGD(M). \n\n2. The experiment execution is strong and the results are convincing. The hyperparameter details are well disclosed and the implementation is provided." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces FRUGAL. The fundamental idea is that during the backward pass, a subset of parameters (a block) receives stateful Adam updates, while the remaining parameters (with blockwise selection) or the gradient residuals (with low-rank gradient projection) receive stateless signSGD updates. The memory efficiency of FRUGAL is achieved by reducing the optimizer states. The authors provide a convergence rate similar to SGD momentum's usual rate under nonconvex optimization. The authors also perform experiments with Llama pretraining on C4 and RoBERTa-base fine-tuning on GLUE tasks. The baselines are primarily GaLore and BAdam." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Major concern**:\n\n1. The idea of FRUGAL is fairly simple (a combination of signSGD, Adam, and a gradient projector), but the empirical and theoretical support behind FRUGAL is not solid enough. FRUGAL's stateful optimizer is basically either GaLore or BAdam. The main contribution is therefore the stateless-optimizer part (signSGD), and its effectiveness relies on the finding that stateless optimizers are sufficient to optimize most parameters in LLMs (linear weight matrices). The authors only provide a single ablation study in Table 3 without further empirical or theoretical insights on the stateless-optimizer part. This evidence alone is not convincing enough to assure generalization to other, non-Llama architectures. So it appears to me that the contribution of this paper is insufficient for an ICLR paper. \n\n2. The motivation (Figure 2) of FRUGAL is that the low-rank gradient projections stay similar over time, and random or blockwise selection can cover the whole space.
Figure 2 justifies that the top gradient directions across timesteps are similar, but it is insufficient to show that random or blockwise selection is always/necessarily better. It is highly likely that after a certain threshold, randomly selected parameters/blocks of parameters will perform worse than the top gradient directions. An ablation study on projector type versus stateful optimization density $\rho$ is definitely needed.\n\n**Minor concern**:\n\n1. The presentation of the Algorithm needs to be clearer. It is hard to understand the exact algorithm (which is actually simple) on a first reading of Algorithm 1 and Section 3.\n\n\nI consider the first major weakness as critical and I would vote for a borderline reject score at this moment." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We create a memory-efficient optimization framework that performs full-rank updates by combining advanced methods like Adam with state-free methods like signSGD." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024frugal,\ntitle={{FRUGAL}: Memory-Efficient Optimization by Reducing State Overhead for Scalable Training},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xSSo8kCA9G},\nnote={under review}\n}" }, "abstract": { "value": "With the increase in the number of parameters in large language models, the process of pre-training and fine-tuning increasingly demands larger volumes of GPU memory. A significant portion of this memory is typically consumed by the optimizer state. To overcome this challenge, recent approaches such as low-rank adaptation (LoRA (Hu et al., 2021)), low-rank gradient projection (GaLore (Zhao et al., 2024)), and block-wise optimization (BAdam (Luo et al., 2024)) have been proposed. However, in all these algorithms, the effective rank of the weight updates remains low, which can lead to a substantial loss of information from the gradient. This loss can be critically important, especially during the pre-training stage. In this paper, we introduce **FRUGAL** (**F**ull-**R**ank **U**pdates with **G**r**A**dient sp**L**itting), a new memory-efficient optimization framework. The framework leverages gradient splitting to perform low-rank updates using advanced optimization algorithms (such as Adam), while updates along the remaining directions are executed via state-free methods like SGD or signSGD. Our framework can be integrated with various low-rank update selection techniques, including GaLore and BAdam. We provide theoretical convergence guarantees for our framework when using SGDM for low-rank updates and SGD for state-free updates. Additionally, our method consistently outperforms concurrent approaches across various fixed memory budgets, achieving state-of-the-art results in pre-training and fine-tuning tasks while balancing memory efficiency and perplexity targets." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Memory-efficient training", "Optimization", "Full-Rank Update", "Large Language Models", "LLM", "Pre-training", "Fine-tuning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b5710ac0833746e8414fe464915bfdcb6c3ea79c.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "FRUGAL: Memory-Efficient Optimization by Reducing State Overhead for Scalable Training" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xTrAA3UKPa
SWGA: A Distributed Hyperparameter Search Method for Time Series Prediction Models
main
Active
Machine Learning;Deep Learning;Time Series Prediction;Hyperparameter Search;Genetic Algorithms
learning on time series and dynamical systems
1;1;3;3
3;4;4;4
1;1;2;2
1;1;1;1
2;1;2;2
2
3.75
1.5
1
1.75
0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. SWGA introduces a combination of genetic algorithms with a sliding window approach tailored specifically for time series forecasting, addressing the distribution shift and non-stationarity challenges unique to this domain.\n\n2. The authors have provided detailed computational procedures by providing multiple algorithm boxes and demonstrated their impact on real-world applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces the Sliding Window Genetic Algorithm (SWGA), a distributed hyperparameter optimization method designed for time series prediction models. Key contributions include (1) a configurable sliding window technique that mitigates overfitting from distribution shifts typical in time series data, (2) a warm-up stage employing Bayesian optimization to establish a robust initial population, and (3) compatibility with distributed computing." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper writing has too much redundancy that can be simplified or moved to the appendix. It is also not clear why the proposed method is distinguished from other hyper-parameter optimization approaches.\n\n2. In the experiment section, there is also a lack of comparison with other hyperparameter optimization methods specifically designed for time series data.\n\n3. The SWGA algorithm’s performance might be sensitive to parameters like window size, population size, and mutation rates. However, the paper lacks an exploration of how these parameters impact outcomes." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* What is the justification for using 3 tree-based models (CatBoost, LightGBM, XGBoost), 1 recurrent model (LSTM), and 1 attention-based model (Transformer) in the experiments? My intuition is that this hyperparameter search would primarily benefit statistical models such as SARIMAX, ETS, etc., which cannot be trained using a traditional k-fold cross-validation approach. 
At least one of these model types should be included in the experiments.\n* Why didn’t you compare the proposed algorithm with commonly used hyperparameter search algorithms in the literature? Do you believe that comparison with the traditional genetic algorithm is sufficient?\n* In the experiments, the proposed algorithm is only compared with the traditional genetic algorithm. It would be beneficial to evaluate other hyperparameter search techniques, such as the classic sliding window search and k-fold validation technique." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* The algorithm is parallelizable, significantly decreasing the computation time required to find the optimal set of hyperparameters.\n* Evaluation is conducted on a sufficient number of widely known real-life datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a hyperparameter search process specifically tailored for time series forecasting, focusing on temporal distribution shifts in the data. The proposed method is based on a variation of the genetic algorithm and the sliding window validation technique. The algorithm is designed to be parallelizable, which helps further reduce the time needed for hyperparameter search." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* In the experiments, the proposed algorithm is only compared with the traditional genetic algorithm. It would be beneficial to evaluate other hyperparameter search techniques, such as the classic sliding window search and k-fold validation technique.\n* The models evaluated in the experiments lack diversity. There are 3 tree-based models (CatBoost, LightGBM, XGBoost), 1 recurrent model (LSTM), and 1 attention-based model (Transformer).\n* The authors mention high computational costs and inefficient exploration of large hyperparameter search spaces as disadvantages of commonly used techniques. However, the experiments only involve accuracy comparisons. Wouldn’t it be beneficial to demonstrate that the proposed hyperparameter search technique converges more quickly?\n* In Appendix A.1, search spaces are provided for the DLinear and PatchTST models, which are not included in the experiments.\n* The proposed algorithm combines variations of three widely used search techniques from the literature with minimal modifications. The novelty therefore seems limited." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "See the weakness section."
}, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "* It’s a relevant problem for time series forecasting.\n* It shows some potential for the sliding window strategy." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a generic algorithm (GA) based HPO method that aims to find good hyperparameters assuming the distribution of observations change over time. The way the proposed method works is to sequentially go through the data, as a sliding window (SWGA), and use the springs and top performing configurations as the starting population for the next iteration. The authors also propose to use TPE instead of Random Search to fill the initial population." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The contribution is minor. I can’t agree that using TPE instead of Random Search is an actual contribution. To fill the initial population, doing a small HPO with TPE should have better results than random configurations. It’s too obvious. Also, to show the sliding window is a better HPO strategy, it needs to first deliver better quality and second is cost effective. While the authors’ experiments (Table 1 & 2) show SWGA has better quality than GA, I am not sure the baseline of GA is properly implemented due to lacking of detail. Also, there are tons of HPO methods that can be combined with the sliding window and they can be used to show sliding window is indeed helpful.\n\n- The technical correctness is hard to access given the current state. Many important details are missing. For example: \n - How do the authors ensure that the only baseline GA is comparable with the SWGA? For me, to see if the sliding window helps, taking Figure 1 as an example, it should be a single GA on the average performance of 12 splits. Assume population size is M, then SWGA takes trains M*12 times and the single GA baseline also trains M*12 times.\n - Line 333: “we use seven historical timesteps to predict one timestep ahead” Does this mean the authors sample segments of size 8 from training and validation set? How many are sampled?\n\n- The experiments, especially Table 3, does not support the contribution.Table 3 only shows HPO helps, not why sliding window or warm start is a good strategy. The part of scalability does not fit into the current story. The contribution of the paper, as claimed by the authors, are the sliding window and warm up. The focus is not distributed training or scheduling etc.\n\n- The terminology and notations are not precise and conventional. For example:\n - Line 35 ”the model can achieve better performance on out-of-sample data with a matching distribution” What does out-of-sample data with a matching distribution mean? It’s also strange that the authors call the prediction window as out-of-sample data. It may or may not be out-of-sample.\n - Line 141, please check notation, many misusages." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "I don't have any questions for the authors" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- the paper is clear and concise, though it presents some nits reported below\n - the method is applied on multiple datasets and on a variety of different models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel hyperparameter scheme based on genetic algorithms and Bayesian optimization. The proposed approach addresses the distribution shift induced by the nature of time series with a sliding window approach. The approach is applied to time-series prediction on multiple datasets and various models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Important weaknesses:\n - the contributions are strictly practical, thus it is of fundamental importance that the method is accessible to researchers and reproducible. Particularly, ICRL strongly encourage to add a “Reproducibility Statement” (https://iclr.cc/Conferences/2025/AuthorGuide), and the paper is missing it. Furthermore, no code is provided as supplementary material, making the results hard to reproduce and inaccessible\n - table 1 shows that it’s the TPE addition that outperforms the other method, but no experiment shows that the proposed method is better than just TPE.\n - results are not reported with confidence intervals. Thus, there is no way to check if the results are just due to variance.\n - No information is given on the training procedure of the GA counterpart. Furthermore, multiple GA algorithms are present in the literature, but it’s never reported which is the one that is applied for the results.\nMinor weaknesses\n - line 178, TPE is criticized for its ability to scale badly with dimensionality, though the paper uses it to initialize the population, inheriting its downside\n - the 4th contribution point is pointless, given that no code is provided as supplementary material\n - the 2nd contribution point states that the proposed approach should address the distributions shift induced by the nature of time-series. However, nowhere in the paper this is investigated, and specifically, nowhere is addressed what are the other options apart from SWGA and why they should fall short in time series predictions\n - in the “conclusion” section, it is stated “Additionally, we also demonstrate the good scalability of SWGA”. However, no reference to other algorithms/approaches is given. Thus, with a lack of baselines, it’s completely irrelevant\n\nMy recommendation is to reject the paper.\nThe main reasons behind this opinion are:\n - the contributions are minors, and they can be summarized in a smart initialization of the GA using TPE, and a sliding window training approach.\n - the lack of reproducibility of the results\n - the narrow comparison with other approaches.\n———————————————————————————\nWriting concerns:\n - citations should be between parenthesis if not part of the main text (e.g. 
lines 39-40 and then the rest of the paper; respectively use \citep for parenthetical citations and \cite for in-text citations)\n - line 68, TPE was never defined (it will be defined later, though it should be defined at its first usage)\n - \"But, their full\" line 107\n - \"K-fold cross-validation effectively reduces the risk of overfitting\": how? It's a validation method, not a regularization method\n - The equation on line 140 is not numbered, and it contains two \"i\" indices. Please use different letters to avoid confusion\n - line 161 \"while domain adaptation is to\"\n - lines 258–263, the pseudocode contains an irrelevant line break\nPersonal opinions:\n - contributions 2 and 3 are not contributions, but positive aspects of the method; it may be better to write them outside of the bullet points\n - the first two paragraphs in 4.1 are repetitive, and the explanation of how the GA works should be part of the background section" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024swga,\ntitle={{SWGA}: A Distributed Hyperparameter Search Method for Time Series Prediction Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xTrAA3UKPa},\nnote={under review}\n}" }, "abstract": { "value": "We propose a distributed hyperparameter search method for time series prediction models named SWGA (Sliding Window Genetic Algorithm). Compared to current genetic algorithms for hyperparameter search, our method has three major advantages: (i) It adopts a configurable sliding window mechanism to effectively combat overfitting from distribution shifts inherent in time series data. (ii) It introduces a warm-up stage using Bayesian optimization-based methods to generate a good initial population. (iii) It supports distributed hyperparameter search across multi-node computing clusters, enhancing both scalability and efficiency. To demonstrate SWGA's efficacy, we conduct hyperparameter search experiments on time series datasets from various domains. The experiment results show that our method consistently finds a hyperparameter configuration that achieves better performance on out-of-sample time series data compared to the traditional genetic algorithm. On average, it reduces the out-of-sample loss by about 56.1%." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Machine Learning", "Deep Learning", "Time Series Prediction", "Hyperparameter Search", "Genetic Algorithms" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review."
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c12fd8c14f13113425e357c1cad012aea48a0fc5.pdf" }, "presentation": null, "primary_area": { "value": "learning on time series and dynamical systems" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "SWGA: A Distributed Hyperparameter Search Method for Time Series Prediction Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xTsvE8gOPT
Tradiffusion++:Hierarchical Guidance for Fine-Grained Trajectory-Based Image Generation
main
Active
Diffusion models; Trajectory control; TraDiffusion++; Training-free methods; Controllable generation; Stable Diffusion (SD); Fine-Grained Control
generative models
3;5;5;6
5;3;5;4
2;3;3;2
1;2;2;3
2;3;3;4
4.75
4.25
2.5
2
3
-0.4842
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "The detailed questions are listed in the weaknesses section, but here is the summary:\n\n- Can the author provide results (even a small-scale one is ok) for comparisons with existing baselines for more results on Tab. 1?\n- Inclusion of more diverse qualitative examples." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- **Good presentation.** The paper is pretty well-written and easy to follow with qualitative figures and motivations introduced for each component in the paper. Each component seems reasonable contributions and innovations over the main baseline (i.e., TraDiffusion), which justifies the technical contributions of the paper.\n- **Good potential applications.** Existing work mostly focus on controlling Diffusion models to generate images following semantic labels, depth, and canny edges. These representations tend to require more costly labor to acquire compared to scribbles. Hence, this line of research has potential practical value in real applications.\n- **Good ablation experiments.** Extensive quantitative and qualitative evidence are used to justify the contributions of each individual module in the paper, which again shows the improvement over TraDiffusion for this paper to be a standalone work." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Tradiffusion++ proposes several techniques to amend issues in the Tradiffusion paper. The paper focuses on a single style of control mechanism (via scribbles) for training-free controlled generation of diffusion models. Compared to Tradiffusion, the paper proposes three types of guidances under different resolution and stages of the Diffusion models. Qualitative figures are provided to support the claims of effects of each proposed components." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **Insufficient experiments.** The main weakness of this version of the paper is the lack of comparisons with existing work. The main table (Table. 1) only compares the paper to TraDiffusion and plain stable diffusion, but no other types of controllable diffusion generation methods. There are immediate questions of how TraDiffusion++ compare to these baselines\n - Control methods that require training such as ControlNet and ControlNext. Can these methods be adapted for scribble-based generation?\n - Training-free methods such as [A] support generation with bounding boxes. Can they be adapted? Even if the answer is not, one very intuitive baseline is to use a bbox to bound the scribbles, and run the generation with the bounding boxes.\n- **More diverse generation examples.** Though the proposed method seems generic and the paper includes many qualitative samples. 
However, they focus primarily on the generation of scenes involving animals. The inclusion of other objects would be interesting.\n\n\n[A] Chen, Minghao, Iro Laina, and Andrea Vedaldi. \"Training-free layout control with cross-attention guidance.\" Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See the weaknesses part. Besides, since it is a training-free method, I think it is very necessary for the authors to conduct more experiments on better diffusion models. I do not think the extra cost would be high." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The method is a simple and effective training-free method, which is quite good. The paper is also well-written and easy to understand. Experimental results also show that it indeed improves the previous method, especially compared to TraDiffusion." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors introduce TraDiffusion++, which incorporates hierarchical guidance to enable fine-grained, trajectory-based image generation. This approach significantly improves upon the previous TraDiffusion model, particularly in terms of shape and layout accuracy. Compared to other kinds of methods like ControlNet, it is training-free, so it avoids the need for extensive training. Furthermore, TraDiffusion++ supports not only single-object generation but also multi-object generation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I mainly have concerns about the experimental comparison and the discussion of related training-free methods.\n\nExperimental comparison. The authors mainly treat TraDiffusion, a training-free method, as the baseline. However, it is still necessary to compare with other popular baselines like ControlNet and InstanceDiffusion etc. Although they are not training-free methods, ControlNet is already well developed and very easy to use by simply inputting the layout/skeleton images. The authors could also consider integrating their method into ControlNet to show that the proposed method is orthogonal. Moreover, the authors should also compare with other training-free methods, especially guidance-based ones such as [1] or feature-injection methods [2]. I recommend the authors do a more comprehensive comparison with other training-free methods.\n\nMore importantly, the authors make a good point that theirs is an energy-function method (Line 185). I suggest the authors elaborate more on the differences between energy-based methods and other gradient-based methods [1], and on the advantages compared to feature-injection methods like ControlNet and [2] etc.
The authors can consider drawing an analogy between diffusion generation and SGD, as pointed out in [3]. Such a discussion would make the paper more theoretically sound.\n\n[1] Universal Guidance for Diffusion Models\n[2] Plug-and-Play Diffusion Features for Text-Driven Image-to-Image Translation\n[3] Zero-to-Hero: Enhancing Zero-Shot Novel View Synthesis via Attention Map Filtering" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. What is the difference between the control loss and the backward guidance in [1]?\n2. When conducting qualitative ablations, can the authors use the same text and trajectory prompt to ensure soundness?\n\n\n[1] Chen, Minghao, Iro Laina, and Andrea Vedaldi. \"Training-free layout control with cross-attention guidance.\" Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2024." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper carefully studies the different attention layers in the UNet, proposing corresponding guidance: low-resolution layout control and high-resolution shape control.\n2. Control Loss and Suppress Loss are designed for layout guidance, and Fix Loss for shape guidance.\n3. Extensive ablations are conducted to verify the effectiveness." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces the training-free TRADIFFUSION++, aiming to solve TraDiffusion's inability to handle complex trajectories. A hierarchical guidance mechanism is designed, including layout guidance for low-resolution control, and shape guidance for high-resolution shape refinement. An IoT metric is introduced to evaluate the trajectory-based generation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. More elaboration on the control loss compared to [1] is needed. What is the difference?\n2. More qualitative ablation studies can be done, using the same text and trajectory prompt, to verify the effectiveness.\n3. More visualization results are expected, such as more complex trajectories with overlaps\n\n[1] Chen, Minghao, Iro Laina, and Andrea Vedaldi. \"Training-free layout control with cross-attention guidance.\" Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- A comparison against segmentation-based approaches would be beneficial in finding use cases where trajectories can be useful over segmentation masks. This can simply be done by dilating the trajectories to form segmentation masks.\n\n- Some generated images in Figure 8 and Figure 15 look unnatural/distorted, such as the surfer and the teddy bear. What is the reason for that?\n\n- What is the reason for the degraded FID when different losses are added?\n\n- In equation (4), is the denominator needed?\n\n- IOT is not clearly explained. You mention that you predict a mask with Yolo. Do you compute the IOU between the trajectory and the segmentation mask? If yes, why do you think this is a suitable metric for evaluating abidance to the provided trajectories?\n\n- Why is the guidance function computed 5 times during inference?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper is easy to follow and the proposed method is clearly explained.\n\n- Several figures were provided to illustrate different components of the pipeline in details." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces an enhanced version of TraDiffusion that enables both coarse and fine control over image generation using input trajectories. \nTraDiffusion offered only coarse control by focusing on low-resolution cross-attention maps, limiting its capability to follow complex trajectories. \nTo address this, the paper presents a Hierarchical Guidance Mechanism (HGM) with three guidance losses to facilitate fine-grained trajectory control. \nBy considering different cross-attention resolutions, the proposed approach can control both object layout and shape. \nFor evaluation, the paper proposes a dataset and metric specific to trajectory control. \nExperiments demonstrate that the approach effectively manages complex trajectories and yields stable results across different seeds." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- I generally find \"trajectories\" less intuitive and user-friendly than \"masks,\" particularly for articulated objects like humans or animals. Drawing a trajectory or skeleton for such objects requires some level of artistic skill to envision a natural pose, which can be challenging.\nOn the other hand, masks don’t impose strict control on the model, giving it the freedom to generate a natural appearance for the object.\nFor example, how would one know that the \"lambda\" shape in Figure 6 is suitable for a cat? Similarly, the teddy bear in Figure 8.\nIt is evident in Figure 15 (the kid and bus) that unintuitive trajectories can lead to distorted objects. I believe that the usability of trajectories needs to be investigated compared to masks through a user-study.\n\n- The main contribution, consisting of the three losses, is relatively minor and heavily inspired by [1]. The main difference is using a trajectory mask rather than bounding boxes (rectangular masks) to optimize different cross-attention maps. 
In fact, at low-resolution layers, it becomes identical to that of [1] for small trajectories, as illustrated in Figure 5. Therefore, I find the technical novelty of the proposed approach quite limited.\n\n- The insights mentioned in Section 3.2 are not new and were discussed in detail in several earlier works such as [2-4]. How do the insights provided in the paper differ from those in [2-4]?\n\n- The \"Coordinate Transformation\" operation in Figure 3 dilates the trajectories, making them closely resemble segmentation masks. Consequently, a comparison with segmentation-based approaches, such as those in [5,6], is necessary to determine whether \"trajectories\" offer any advantage over segmentation maps.\n\n[1] Chen, Minghao, Iro Laina, and Andrea Vedaldi. \"Training-free layout control with cross-attention guidance.\" Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2024.\n\n[2] Hertz, Amir, et al. \"Prompt-to-prompt image editing with cross attention control.\" arXiv preprint arXiv:2208.01626 (2022).\n\n[3] Tang, Raphael, et al. \"What the daam: Interpreting stable diffusion using cross attention.\" arXiv preprint arXiv:2210.04885 (2022).\n\n[4] Liu, Bingyan, et al. \"Towards Understanding Cross and Self-Attention in Stable Diffusion for Text-Guided Image Editing.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\n\n[5] Kim, Yunji, Jiyoung Lee, Jin-Hwa Kim, Jung-Woo Ha, and Jun-Yan Zhu. \"Dense text-to-image generation with attention modulation.\" Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 7701–7711, 2023.\n\n[6] Couairon, Guillaume, Marlène Careil, Matthieu Cord, Stéphane Lathuilière, and Jakob Verbeek. \"Zero-shot spatial layout conditioning for text-to-image diffusion models.\" Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 2174–2183, 2023." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024tradiffusionhierarchical,\ntitle={Tradiffusion++:Hierarchical Guidance for Fine-Grained Trajectory-Based Image Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xTsvE8gOPT},\nnote={under review}\n}" }, "abstract": { "value": "Currently, many training-free methods based on diffusion models allow controllable generation. These methods, such as TraDiffusion, introduce control through additional trajectory input. While they are more user-friendly than traditional methods, they offer only coarse control over the Stable Diffusion (SD) model. We observe that SD focuses more on layout control at lower resolutions of cross-attention and shape control at higher ones. Based on this, we propose TraDiffusion++, which introduces a Hierarchical Guidance Mechanism (HGM) for finer-grained control in generation. HGM includes three key components: Control Loss (CL), Suppress Loss (SL), and Fix Loss (FL). CL aligns the layout with the trajectory across layers. SL suppresses objects outside the trajectory at lower resolutions. FL refines regions not fully controlled by the trajectory using attention feedback at middle and high resolutions. The combination of CL and SL ensures effective layout control. The interaction between CL and FL improves shape generation. We build a dataset with simple and complex trajectories.
Experiments show that TraDiffusion++ achieves stable layout control and fine-grained object generation. This also reveals new insights into SD’s control mechanisms." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Diffusion models; Trajectory control; TraDiffusion++; Training-free methods; Controllable generation; Stable Diffusion (SD); Fine-Grained Control" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/5c4db20c692aaa2b8bbaad442c88aeea91c46163.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/eec223732fff9932b9645fdb211dbaa4f4d8d9d4.zip" }, "title": { "value": "Tradiffusion++:Hierarchical Guidance for Fine-Grained Trajectory-Based Image Generation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
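Finally, to ground the Control/Suppress Loss discussion in the TraDiffusion++ reviews, here is a minimal sketch of energy-based trajectory guidance on cross-attention maps. The tensor shapes, the mask dictionary, the loss weighting, and the step size are assumptions for illustration; the paper's exact loss definitions (including the Fix Loss applied at middle and high resolutions) are not reproduced here.

```python
# Sketch: an energy over cross-attention maps pulls a token's attention mass
# inside its trajectory mask (Control) and damps it outside (Suppress); the
# latent is nudged along the negative energy gradient between denoising steps.
import torch

def guidance_energy(attn_maps, traj_masks, lambda_sup=1.0):
    """attn_maps: dict resolution -> (H*W, n_tokens) attention probabilities;
    traj_masks: dict (resolution, token_idx) -> (H*W,) binary mask obtained
    by rasterizing and dilating the input trajectory."""
    energy = 0.0
    for (res, tok), mask in traj_masks.items():
        a = attn_maps[res][:, tok]
        inside = (a * mask).sum() / (a.sum() + 1e-8)
        energy = energy + (1.0 - inside)                        # Control loss
        energy = energy + lambda_sup * (a * (1 - mask)).mean()  # Suppress loss
    return energy

def guided_update(latent, attn_forward, traj_masks, eta=0.1, n_iters=5):
    # One review asks why the guidance is computed 5 times during inference;
    # n_iters repeats the gradient nudge accordingly.
    for _ in range(n_iters):
        latent = latent.detach().requires_grad_(True)
        attn_maps = attn_forward(latent)  # UNet pass hooked to return attention
        e = guidance_energy(attn_maps, traj_masks)
        grad = torch.autograd.grad(e, latent)[0]
        latent = (latent - eta * grad).detach()
    return latent
```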