id (stringlengths 10 to 10) | title (stringlengths 3 to 179) | track (stringclasses 1 value) | status (stringclasses 3 values) | keywords (stringlengths 2 to 2.39k) | primary_area (stringclasses 21 values) | author (stringclasses 501 values) | authorids (stringclasses 501 values) | aff (stringclasses 1 value) | aff_domain (stringclasses 1 value) | position (stringclasses 1 value) | rating (stringclasses 355 values) | confidence (stringlengths 0 to 19) | soundness (stringclasses 642 values) | contribution (stringclasses 596 values) | presentation (stringclasses 782 values) | rating_avg (float64 0 to 9) | confidence_avg (float64 0 to 5) | soundness_avg (float64 0 to 4) | contribution_avg (float64 0 to 4) | presentation_avg (float64 0 to 4) | corr_rating_confidence (float64 -1 to 1) | project (stringclasses 1 value) | github (stringclasses 1 value) | Review (listlengths 2 to 10)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
zi8YBcmXqA | PokeChamp: an Expert-level Minimax Language Agent for Competitive Pokemon | main | Active | multiagent;LLM agents;competitive games;game theory;reinforcement learning | foundation or frontier models, including LLMs | 3;3;6;6 | 4;3;3;4 | 2;2;3;3 | 1;2;3;3 | 2;2;3;3 | 4.5 | 3.5 | 2.5 | 2.25 | 2.5 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "No"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. How does PokéChamp’s computation time compare to that of PokéLLMon, which also utilizes GPT-4o, considering the additional requirements for minimax tree search and LLM queries?\n \n2. How many human players were involved in obtaining the online ladder results presented in Table 5?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. **Novel Integration of LLM with Minimax**: The paper innovatively combines an LLM with a minimax search to simulate human-like decision-making in Pokémon battles. This approach enables competitive performance without additional training and is adaptive to partially observable information.\n \n2. **Performance on Real-World Benchmarks**: PokéChamp’s efficacy is validated in real-world benchmarks and against heuristic bots, achieving a high Elo rating of 1500 and consistently outperforming other state-of-the-art agents. \n\n3. **Comprehensive Dataset and Benchmarks**: The paper provides a large dataset of over one million Pokémon battles, including 150,000 high-Elo games. These benchmarks, based on real player data and tailored puzzles, significantly enhance the study’s reliability and offer a valuable resource for further research in this domain."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces PokéChamp, a large language model (LLM)-powered agent designed for competitive Pokémon battles. The agent integrates three LLM-enabled components for action sampling, opponent modeling, and state evaluation, which enable it to make informed and strategic decisions during gameplay. It demonstrates superior performance over existing bots and heuristic-based models and achieves a top 10% ranking in online Pokémon battles."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. **Limited Prediction Accuracy for Opponent Modeling**: The limited accuracy in human and opponent action prediction, with opponent prediction only reaching 13–16%, may constrain the overall performance of the method, which relies on accurate opponent modeling.\n\n2. **Limited Exploration of Depth-Limitation Trade-offs**: The choice of depth-limited minimax search is justified as a balance between computational feasibility and decision quality. However, the trade-offs between search depth, LLM accuracy, and action quality are not thoroughly analyzed. Further exploration, potentially with ablation studies, would clarify the impact of depth limitations on performance.\n\n3. What is the role of Nash equilibrium in this paper? The paper does not seem to analyze the Nash equilibrium outcomes, which makes the definition of Nash equilibrium in Section 2 appear somewhat disconnected. It would be beneficial to include Nash equilibrium results in addition to Elo.\n\n4. How accurate is the next-state prediction? Since the minimax search relies on simulated rollouts of actions, the accuracy of next-state predictions could significantly impact the agent's performance."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* How does the mathematical formalization in Section 2 relate to the design of the agent?\n* Can additional fine-tuning with the collected data improve the performance of the agent?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* The paper introduces a novel integration of LLMs with minimax search. The agent leverages LLMs for three key components of minimax search: action sampling, opponent modeling, and value calculation. This integration allows the agent to employ human-like strategic thinking, bringing an expert-level game-playing agent.\n\n* The authors present a comprehensive set of experiments that demonstrate the capabilities of the agent across different competitive settings. The online ladder performance against human players with a competitive Elo rating provides a real-world evaluation of the agent.\n\n* The paper is well-organized and presented in a logical structure, allowing readers to follow both the technical intricacies and high-level motivations of the research."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces an LLM agent, PokeChamp, for competitive Pokemon battles. The model leverages a depth-limited minimax search to play the game, where the LLM plays the role of action sampling, opponent modeling, and state value calculation. The agent is shown to outperform all existing AIs significantly and achieve expert performance on the online ladder."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The agent's design heavily relies on an in-depth understanding of competitive Pokemon gameplay, and its success relies on domain-specific engineering in the action sampling, opponent modeling, and value calculation components. While these adaptations make the agent effective in this domain, they limit the model’s generalizability to other game-playing tasks with different mechanics or structures.\n\n* The idea of integrating LLMs with the minimax search framework for game-playing agents is closely related to prior work by Guo et al. (2024), which explores a similar concept in two-player zero-sum games.\n\n* While the paper provides a mathematical formalization of POMGs and makes assumptions like perfect recall, the connection between this theoretical framework and the practical implementation of the agent is not entirely clear.\n\n* The paper lacks an ablation study that examines the impact of each LLM-based component within the minimax search framework on the agent's overall performance. Since the authors use the LLM to replace three primary components, an ablation study would be invaluable in demonstrating how each component contributes to the agent's success.\n\n\nGuo, Wei, et al. \"Minimax Tree of Thoughts: Playing Two-Player Zero-Sum Sequential Games with Large Language Models.\" ICML 2024 Workshop on LLMs and Cognition."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Confusion about the damage calculator: In lines 92-94, the authors mention that this external tool \"calculates the core game engine capabilities in combination with loading historical data from real player games in order to load likely stats for the opponent’s team\". I didn't quite get this expression. Also, I found this definition conflicting with Figure 3, where the calculator seems to just output the number of turns needed to KO the opponent's current Pokemon for each possible move of the player's current Pokemon. \n2. Action prediction: The goal of the work is making the LLM agent game-theory aware, yet the 1M battles collected are of human play. I wonder how game-theory optimal those data are. If they are not, what is the point of accurately predicting the opponent's actions when those actions can be bad moves?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The authors propose a novel application setting, competitive Pokemon, where the turn-based nature of the game leads to a nice formulation as a POMG. They manage to construct a minimax tree with the help of an LLM prior. PokeChamp is able to achieve top human performance in real game settings."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "PokeChamp aims to bridge the gap in game theory aware LLM agents. The work uses competitive Pokemon as their case study and propose a minimax search method. Specifically, LLM is used in three key components in constructing the minimax tree: action sampling, opponent modeling, and state value calculation. PokeChamp exhibits good performance against human players."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper is not so well-written: multiple figures, tables, and an appendix referred to in the main body are missing. Also, as someone not familiar with competitive Pokemon, I found some of the concepts like the Damage Calculator hard to grasp. It would be very helpful if you could add explanations of how the game works.\n2. Overall purpose of the work: It's hard to understand the contribution of this work. While the application case is interesting, I don't see this general framework being applicable to other games. For most games, it is not realistic to use an LLM to replace the state value function unless the LLM itself has enormous knowledge of the game."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please refer to the Weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The idea of utilizing LLMs for competitive gameplay is interesting, and the results seem promising."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents PokéChamp, a large language model (LLM)-powered agent designed for competitive Pokémon battles. Utilizing the minimax approach, PokéChamp integrates LLM-driven action sampling, opponent modeling, and value estimation to achieve strong performance in two-player, turn-based Pokémon games. The paper also introduces a dataset of Pokémon battles and benchmarks the system's performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The biggest weakness of the paper is that the proposed method is specifically designed for Pokémon, which severely limits its generalizability. This also narrows the paper's target audience, and it's unclear how the methodology could be transferred or applied to other problems. The paper lacks a discussion on whether the framework would work for more complex, real-world competitive tasks, or for different game types where randomness, complexity, and strategic depth may vary significantly.\n\nThe writing also has significant room for improvement. First, readers unfamiliar with Pokémon may find it difficult to understand many details in the paper. Additionally, several concepts are introduced before being properly defined, such as the Abyssal bot on line 197 and EV/IV on line 240. Furthermore, in Section 2, \"MATHEMATICAL FORMALIZATION,\" numerous symbols and terms are defined, but these concepts are not used in the subsequent main text. It's unclear what purpose this section serves—perhaps it was included simply to make the paper appear more mathematical?\n\nThe minimax-based approach combined with LLMs may not be as novel as it initially appears. Minimax tree search has been extensively explored in AI for games, and while integrating LLMs offers an interesting twist, the underlying framework is still fundamentally a minimax search, which limits the novelty. Additionally, there is no evidence that PokéChamp advances the state of the art in game-theoretic modeling beyond prior work in other competitive games such as chess, Go, or poker.\n\nThe paper heavily relies on heuristic tools like damage calculators and historical data, raising concerns about the system’s true adaptability. This reliance on pre-defined tools limits the agent's flexibility and its ability to dynamically adapt to new or unseen scenarios. This suggests that the system lacks generalizability beyond the specific setup of competitive Pokémon, making the approach less scalable to other domains or even future game updates.\n\nThe accuracy of opponent modeling also remains a concern. The relatively low accuracy in predicting opponent actions suggests that more refined or adaptive modeling techniques may be needed to further enhance performance.\n\nLastly, while the paper acknowledges the limitations of LLMs in planning and strategy, it fails to convincingly address these issues. The reliance on LLMs for action sampling and opponent modeling could lead to brittle decision-making, especially in cases where long-term strategy and deep reasoning are required."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024pokechamp,\ntitle={PokeChamp: an Expert-level Minimax Language Agent for Competitive Pokemon},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zi8YBcmXqA},\nnote={under review}\n}"
},
"abstract": {
"value": "We introduce \\texttt{Pok\\'eChamp}, a Large Language Model (LLM) powered game-theoretic aware agent for two-player competitive Pok\\'emon battles, that uses an LLM prior and collected high-Elo human data to model minimax search without any additional training. \\texttt{Pok\\'eChamp} uses a depth-limited minimax search online where the LLM replaces three key components: 1) action sampling from the LLM guided by prompts (including from a damage calculation tool), 2) opponent-modeling via the historical likelihood of actions from our dataset to model the effect of LLM-predicted opponent actions, and 3) state value calculation for the LLM to reflect on each intrinsic state. \\texttt{Pok\\'eChamp} outperforms all existing AIs (76\\%) and heuristic bots (84\\%) by an enormous margin, including winning consistently (>50\\%) against prior human-parity work run with a frontier model, GPT 4-o, while using an open-source 8 billion parameter Llama 3.1 model. \\texttt{Pok\\'eChamp} achieves expert performance in the top 10\\% of players on the online ladder against competitive human players at an Elo of 1500. Finally, we collect the largest Pok\\'emon battling dataset, including 1 million+ games with 150k+ high Elo games, prepare a series of battling benchmarks based on real player data and puzzles to analyze specific battling abilities, and provide crucial updates to the local game engine. Our code is available \\href{https://sites.google.com/view/pokechamp-llm}{online}."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"multiagent",
"LLM agents",
"competitive games",
"game theory",
"reinforcement learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/f5dd1b2c8bdc9ba01d62aaf306bfeee07e80996e.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/15516911a0fc453642d037a66024f811e2ec009f.zip"
},
"title": {
"value": "PokeChamp: an Expert-level Minimax Language Agent for Competitive Pokemon"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
ziB549CQ30 | Solving the Fuzzy Job Shop Scheduling Problem via Learning Approaches | main | Active | Fuzzy job shop scheduling problem;neural combinatorial optimization;self-supervised learning | optimization | 3;3;3;5 | 3;5;4;3 | 2;1;2;3 | 1;1;2;2 | 2;2;2;3 | 3.5 | 3.75 | 2 | 1.5 | 2.25 | -0.522233 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. The proposed method is not compared with state-of-the-art methods, including heuristic methods and reinforcement learning methods. \n\n2. Some arguments are not true or correct, particularly the argument related to the FJSSP and dynamic or fuzzy processing times. JSSP problems have been investigated for years, and the dynamic and fuzzy variants have also been solved by many other algorithms, while the paper totally ignores existing work.\n\n3. The parameter setting is not justified; an ablation analysis on parameter settings is needed.\n\n4. Please clarify whether the running time of the proposed algorithm includes the training time of the neural networks. \n\n5. I am wondering whether the compared method, CP, also uses a large training set to train a model and is then tested on the test set. If it does not, the comparison will not be fair.\n\n6. An analysis of learning with perturbation is desirable. How is the goal shown in Fig. 2 achieved? It is also necessary to discuss its pros and cons.\n\n7. Since JSSP problems do not have true labels, reinforcement learning is another learning method commonly used for this problem. How does the proposed method compare with reinforcement learning methods?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. A new problem of JSSP, considering fuzzy processing times, is investigated and solved.\n\n2. The proposed method performs better than the compared method.\n\n3. The proposed method can achieve self-learning by using pseudo-labels for training.\n\n4. The idea is good and the paper is well-written."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a self-supervised algorithm for the FJSSP (SS-FJSSP) that employs an iterative mechanism to refine pseudo-labels, progressively transitioning from suboptimal to optimal solutions. The proposed method shows better performance than constraint programming methods on FJSSP instances."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The experiments are not sufficient to show the contributions of the proposed method without comparing other methods.\n\n2. The novelty is limited since the fuzzy job shop scheduling problem has been investigated and the paper looks like an extension of existing work by adding the fuzzy processing time."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Why isn't P_p the focus of a sensitivity analysis?\n2. Why isn't HTS compared against? Where do the runtimes for CP come from? Why should I care that this method is faster than an exact method?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper is organized well, but contains some really strange text at times. I find it hard to believe the line \"The surge in deep learning has catalyzed the emergence of neural combinatorial optimization (NCO) sparking a burgeoning interest in leveraging...\" was not generated by an LLM (note that I do not believe the paper was written by an LLM, just this part). Regardless of where it comes from, it is not so good. The document is overall understandable, the typos do not impact this. It would be best, though, if the authors used GPT to support fixing typos, or just watched the spellchecker (e.g., 'algotirhm' on page 6)\n\nThe main novelty is the application of the semi-supervised learning approach from Corsini et al. (2024) (note that this paper is cited with its arxiv version, but actually it is accepted to NeurIPS 24) to the FJSSP with a few twists. There are some interesting ideas in here. The semi-supervised approach is particularly interesting, although the novelty in this paper is (1) only a small advancement over the previous work of Corsini et al. and (2) not examined in an ablation analysis at all (see below). The second claim to novelty about a \"refinement process\" seems to me to just be the Corsini paper again, so I am not sure what is meant here -- the perturbation technique?"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "A semi-supervised learning approach for the fuzzy job shop scheduling problem (FJSSP) is presented. Experimental results are provided on a subset of the standard instances for this problem and compared against a constraint programming baseline."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "There are two major issues with the paper that right now prevent me from being more positive about it.\n\n1. The new perturbation strategy is not experimentally validated at all. First note that it is said that \"Corsini et al. (2024) ... select the optimal solution as the pseudo-label\". My understanding of the Corsini paper is that they use the best solution from the sample. I assume that this is just a typo though. More importantly, where does P_p come from and who sets it? Why is this value not experimentally validated? The experimental section says that it is set to 0.05 and that's all. First, turning it off is an important ablation step, and showing the sensitivity is also very interesting. I also wonder why the authors do not try this technique out on the original JSSP? It would improve the generalizability of this paper, which is rather sharply focused on the FJSSP right now and, in all honesty, ICLR is not an applications conference. The reviewers need to at least believe there is a chance of generalization here, but right now there is no evidence.\n2. The experimental results are not complete. They compare the approach with a CP model, mostly lose, and then claim victory because it is faster. First, CP is proving optimality and this method is not, so it is a completely apples to oranges comparison. Second, where do these results even come from in the Afsar et al. (2023) paper? I do not see these runtimes; on the contrary, Table 5 in the Afsar paper indicates that S6.1-4 only need 0.1 seconds to solve to optimality and the heuristic approach HTS only 0.2 seconds. On S10.1-4 on average 76 seconds are needed for the CP model and 1.0 for HTS. This brings me to a question: where is HTS in this work? How does HTS compare to the SS-FJSSP approach? I note that this comparison is quite important as SS-FJSSP has many domain specific components in it, so the method is not interesting in and of itself -- it ought to be as good as the current techniques available. One more note here: the authors make note that they do not re-implement the CP model because the source code is not available... but we are talking about a model with 5 constraints, this would not be much of a challenge...\n\nI note that I am also a bit wary about the FJSSP in general. The fuzzy literature makes rather simplifying assumptions (triangular distribution), thus avoiding any of the hard parts of modeling stochastic optimization problems while still claiming to be more realistic. While I agree that it is more realistic than a standard JSSP, the question is whether it is really worth the trouble compared to just modeling a two-stage, robust or chance constrained problem. My opinion is that it is not because the triangular distribution is unrealistic for a variety of reasons (real distributions are often shifted, can have long tails, etc.). Thus, I think the authors are betting all their money here on a rather weak application that is not at the level of top conferences.\n\n\nSome other notes about the paper for the authors to improve:\n1. The references in the introduction about the FJSSP are not terribly convincing that it is an important problem. For a paper at a top conference, I would expect papers at top conferences (or an argument why those venues have overlooked this important problem)\n2. The second paragraph of the introduction is not well written; it is very long and just meanders through the literature.\n3. Page 3: Additional should be addition\n4. The explanation about the arrow operator in 2.2 is unclear (namely i->x or x->j) -- I think this can be interpreted (incorrectly) as banning chains of operations.\n5. The mathematical model ought to have a min before the max so it is more clear. Also the math model does not use the max operator in the constraints, which seems like it ought to be necessary? I admit I am not so familiar with modeling with fuzzy variables, so maybe there is a convention there I am not aware of.\n6. The job numbers ought to be more clear in Figure 1 so it is easier to understand the scheduling scheme."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. This paper lacks contributions and originality. It just proposes a network based on an encoder and decoder for the FJSSP. The encoder-decoder architecture is common in deep learning networks. Therefore, the authors should discuss how to design the encoder and the decoder in detail. \n2. The parameter analysis and architecture of the proposed method should be further discussed. The paper uses GAT as the backbone. Is it possible to use another network to solve the problem?\n3. The effectiveness of the proposed method is questionable. As shown in Table 2, the results obtained from SS-FJSSP are not significantly different from those obtained with CP.\n4. The paper lacks a parameter analysis and ablation study, leaving a gap between the results and the conclusion.\n5. The comparison of RT is not fair. SS-FJSSP and CP are not executed in the same environment. The training time of SS-FJSSP should be included in the final RT. Additionally, SS-FJSSP uses GPUs to accelerate computation, while CP is implemented only on a CPU."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "The paper proposes a deep network that addresses this gap by examining the potential of neural networks to process fuzzy information for solving FJSSP."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a self-supervised algorithm for FJSSP (SS-FJSSP) that uses an iterative mechanism to refine pseudo-labels, moving from suboptimal to optimal solutions. This algorithm effectively bypasses the common neural combinatorial optimization challenge of obtaining true labels. The paper thereby examines the potential of neural networks to process fuzzy information for solving FJSSP."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The authors just compared their method with one other method, which is not enough to verify the effectiveness of the proposed method. There are state-of-the-art evolutionary computation algorithms for FJSSP. What is the advantage of the proposed method over them? A comparison between the proposed method and them should be implemented to analyze the difference. Meanwhile, the proposed algorithm is not very competitive in terms of FMS, and its performance is poorer than that of the comparison algorithm on some problems. \nThe paper lacks parameter analysis and an ablation study, leaving a gap between the results and the conclusion."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "I do not have ethics concerns."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Could you provide more detail on the technical novelty of the fuzzy model-related features and the feature vectors introduced in your method? How do these features fundamentally differ from existing studies that use similar high-level graph representations for job shop scheduling or similar problems?\n\nThe random solution selection mechanism is mentioned as part of your training strategy. Can you offer a more thorough explanation, both theoretically and empirically, to justify the effectiveness of this technique? Specifically, how does this random solution selection improve performance compared to existing methods, and how does it contribute to the near-optimal solutions?\n\nThe baseline algorithms used for comparison are quite dated (8-9 years old). Are there any recent state-of-the-art methods, particularly involving newer GNN architectures, that could be included in the experiments? How would incorporating these newer baselines demonstrate the technical advancements of your approach?\n\nHave you considered other recent GNN-based architectures for job shop scheduling or other related problems? How does your approach specifically improve on those?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The integration of neural networks with fuzzy scheduling and self-supervised learning seems to be interesting and important.\n\nThe paper is generally well-structured, and the explanations of both the problem and the proposed solution are clear.\n\nThe proposed solution addresses a critical issue in fuzzy scheduling by improving computational efficiency while maintaining accuracy."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses the Fuzzy Job Shop Scheduling Problem (FJSSP), an extension of the traditional job shop scheduling problem that incorporates uncertainty, better reflecting real-world manufacturing environments. The authors proposed a self-supervised algorithm (SS-FJSSP) that employs neural combinatorial optimization to handle fuzzy data and solve FJSSP. The method is computationally efficient, achieving comparable results to state-of-the-art algorithms on several benchmark problems."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "My primary concern about this paper is regarding its novelty. While using neural combinatorial optimization (NCO) techniques to solve FJSSP problems might be novel, the high-level graph representation of the FJSSP problem adopted in this paper is very similar to many existing studies. It seems that the key difference is the introduction of fuzzy model related features and feature vectors. However, the technical novelty associated with these features has not been properly highlighted and justified.\n\nMeanwhile, the training strategy does not seem to be new. Perhaps random solution selection is a novel element of the training algorithm. Nevertheless, the effectiveness of this selection mechanism is only discussed intuitively. More thorough theoretical and empirical analysis may be necessary to clearly quantify its importance for the newly proposed NCO system to find near-optimal solutions. To my understanding, the analogy with genetic algorithms does not reveal the true advantages of using random solution selection techniques, since the proposed NCO system is not closely related to genetic algorithms.\n\nThe baseline algorithms adopted in this paper were published about 8 to 9 years ago. It is important to include more recent baselines in the experimental comparison to truly understand the technical advancement introduced by the new approach. In particular, to clearly demonstrate the advantages of the newly developed encoder-decoder network architecture, several existing approaches using different GNN architecture designs should be experimentally examined and compared in this paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024solving,\ntitle={Solving the Fuzzy Job Shop Scheduling Problem via Learning Approaches},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=ziB549CQ30},\nnote={under review}\n}"
},
"abstract": {
"value": "The fuzzy job shop scheduling problem (FJSSP) emerges as an innovative extension to the conventional job shop scheduling problem (JSSP), incorporating a layer of uncertainty that aligns the model more closely with the complexities of real-world manufacturing environments. This enhancement, while enhancing its applicability, concurrently escalates the computational complexity of deriving solutions. In the domain of traditional scheduling, neural combinatorial optimization (NCO) has recently demonstrated remarkable efficacy. However, its application to the realm of fuzzy scheduling has been relatively unexplored. This paper aims to bridge this gap by investigating the feasibility of employing neural networks to assimilate and process fuzzy information for the resolution of FJSSP, thereby leveraging the advancements in NCO to enhance fuzzy scheduling methodologies. To this end, we present a self-supervised algorithm for the FJSSP (SS-FJSSP). This algorithm employs an iterative mechanism to refine pseudo-labels, progressively transitioning from suboptimal to optimal solutions. This innovative approach adeptly circumvents the significant challenge of procuring true labels, a common challenge in NCO frameworks. Experiments demonstrate that our SS-FJSSP algorithm yields results on a par with the state-of-the-art methods while achieving a remarkable reduction in computational time, specifically being two orders of magnitude faster."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Fuzzy job shop scheduling problem",
"neural combinatorial optimization",
"self-supervised learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/eaaefb43ceae2b935ef2054dc410225736219514.pdf"
},
"presentation": null,
"primary_area": {
"value": "optimization"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Solving the Fuzzy Job Shop Scheduling Problem via Learning Approaches"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
ziw5bzg2NO | Do You Keep an Eye on What I Ask? Mitigating Multimodal Hallucination via Attention-Guided Ensemble Decoding | main | Active | Hallucination;Multimodal Hallucination;Large Vision-Language Model | generative models | 5;5;6;6 | 4;3;3;5 | 3;3;3;3 | 2;2;3;3 | 4;3;4;3 | 5.5 | 3.75 | 3 | 2.5 | 3.5 | 0.301511 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. It is unclear how to get the $d\\times d$ attention matrix in L161, Section 3.1, where d is the number of image patches. Can the authors elaborate on the dimensions of $Q_t$ and $K_t$ in detail?\n2. As for the logit ensemble, should the generated logits have the same size as the LLM tokenizer vocabulary? It is unclear why the proposed adaptive plausibility constraint improves performance by only keeping the elements in $p_{ED}$ that are larger than a threshold.\n3. Do we assume that the generated token sequences have the same length? How should we do the logit ensemble when we have different numbers of generated tokens for different images?\n4. Even though the proposed method is training-free, it also introduces a lot of hyper-parameters: $\\alpha$, $\\beta$, $N$, $H$ and $K$. How to tune these hyper-parameters is unclear."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The ensemble decoding strategy proposed in this work is well-motivated and sound by zooming into local regions which is more important for decoding at each decoding step. The proposed method also does not require any training.\n2. Starting from the baseline model, several optimization techniques have been proposed to improve model accuracy and running efficiency, including adaptive plausibility constraint to only keep the most plausible tokens, and a fast version ED to only focus on one particular sub-region in the image.\n3. Strong performance and new state-of-the-arts have been reported on several benchmarks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a new Ensemble Decoding (ED) strategy to mitigate the object hallucination issue with Large Vision-Language Models (LVLMs). The proposed approach is motivated by the hypothesis that irrelevant objects and low object resolution in images are likely to impact performance negatively. In ED, the input image is split into sub-images, and logit distributions are combined by assigning weights through the attention map. An ED adaptive plausibility constraint is proposed to calibrate the logit distribution. FastED, an optimized variant of ED, balances performance with speed by selecting the sub-image with the highest mean attention score from the original image. ED shows improved performance on several benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The proposed ensemble decoding strategy fuses the logits extracted from the original image and multiple local crops. This method makes sense, but it introduces a much greater computation burden for real-world deployment. FastED only processes one local crop and has some accuracy drop. However, one simpler alternative is to move the ensemble to the input and concatenate the multiple images in the prompt. We can imagine that this method is much faster than ED/FastED because only one feed-forward pass is needed. There is no discussion about where to do the ensemble."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "- Was there any weighting operation applied during the ensemble process?\n- Can the authors discuss the relationship between this method and model uncertainty? When we perform multiple forward passes on the model and take the average, we are essentially obtaining model uncertainty. Can this method be understood as adding more logical constraints, but essentially optimizing the answer through model uncertainty?\n- Besides logit ensemble, are there other levels of ensemble that could be attempted?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The idea of Ensemble Decoding (ED) is interesting and well motivated, both by evidence that the model gives the right answer after applying crop and resize to the image, and by the toy experiment checking whether properly divided sub-images can reduce object hallucination in the outputs of LVLMs\n- The authors conducted extensive ablation studies and main experiments to validate the effectiveness of the method, and the experimental results appear to be quite convincing."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses the issue of object hallucination in Large Vision-Language Models (LVLMs), where models generate descriptions that include nonexistent objects or misrepresent existing ones. While previous approaches such as data augmentation and training-free methods have attempted to mitigate this problem, they face scalability issues and often rely on external modules. The authors propose a new method called Ensemble Decoding (ED), which divides input images into sub-images and combines logit distributions by weighting them through an attention map. Additionally, the paper introduces ED’s adaptive plausibility constraint to calibrate logit distributions and a variant named FastED for speed-critical applications. Extensive experiments demonstrate that this method achieves state-of-the-art performance on hallucination benchmarks, confirming its effectiveness in reducing object hallucinations."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- There doesn’t seem to be a deeper explanation for why this method works. It would be better if the authors could provide more insights.\n\n- How do the degree of cropping and resizing, as well as the attention weights, affect the results? I don’t seem to see any related discussion or analysis on this."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "When splitting the original image into sub-images, is there no overlap at all between each sub-image?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. It is interesting to explore the effect of the number of unnecessary objects in an image and the object resolution on the performance of LVLM.\n2. The proposed method is simple and straightforward to implement, which enhances its reproducibility.\n3. Experiments on multiple benchmarks demonstrate that the proposed method can achieve better performance than state-of-the-art work."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Large Vision-Language Models (LVLMs) excel at tasks like image captioning but face challenges with object hallucination, where they describe objects that aren't actually present in images. While existing solutions like data augmentation have attempted to address this issue, they face scalability problems and often require additional modules. This paper introduces Ensemble Decoding (ED), which works by dividing input images into smaller parts and combining their logit distributions using attention map-based weighting. It also develops an ED adaptive plausibility constraint for logit distribution calibration and a faster variant called FastED for time-sensitive applications. Through extensive testing on hallucination benchmarks, the proposed method demonstrates superior performance compared to existing approaches."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I have some concerns about this paper:\n\n1. From Figure 3, it seems that feeding multiple sub-images into LVLM gives more correct answers than feeding the original image. I'm curious about the performance of ED without using the original image.\n2. If LVLM produces the correct output for the original image, but produces the wrong output for the sub-image, does ED negatively affect the understanding of the model in this case? For example, can the split sub-image get a valid output when the target object is located at the centre of the image?\n3. It is interesting that low-resolution objects may trigger the hallucination of large models. However, dividing the image into small sub-images does not improve the absolute resolution of the object but rather the percentage of the object in the image.\n4. How is the metric of inference latency calculated? ED only slightly outperforms AGLA in terms of recall at the cost of more than twice the inference latency; FastED is close to AGLA in terms of efficiency but has lower recall. So what is the advantage of the method proposed in this paper over AGLA? The authors only show the performance of FastED on the CHAIR dataset, which makes its performance difficult to evaluate.\n5. In this paper, there is a lack of analysis and ablation experiments on the impact of the constituent modules in ED (e.g., Attention-Guided Weight and Adaptive Plausibility Constraint) on the performance of the method.\n6. The exact number of sub-images into which the image is split (determined by the hyperparameter N) may have a strong impact on the efficiency and performance of the ED, and the impact of this factor is not analysed in this paper.\n7. From Table 4, the improvement from aggregating the logits of sub-images does not seem to be significant."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Please refer to the weaknesses section for questions. Additionally: \n\n1) Have you tried the recent Qwen-2 VL model with your technique? Given that it does not use a Q-former based technique, it would be interesting to see how this method works with a model architecture other than LLava\n2) If possible, could you provide some additional examples of attention maps and final attention weights for the N sub-images for different sets of prompts and input images? It would be helpful to understand how exactly the attention weights vary."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1) The paper is very well written and it is easy to understand their method. \n2) The approach presented is easy to implement, is training-free and only requires inference-time re-weighting of logits. \n3) The pilot study which talks about the main causes of object hallucination in MLLMs is instructive and useful to the community as a whole. \n4) It is easy to understand the motivation of their method from the conclusions of the study they present, which shows that masking irrelevant regions of an image helps reduce object hallucination, and their approach of ensembling logits from various sub-images should help with that\n5) The results achieve SoTA performance on commonly used benchmarks to measure hallucination (POPE and CHAIR)"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a technique (ED) to reduce hallucination in multi-modal large language models (MLLMs) by splitting an image into N sub-images followed by using an attention-guided ensemble decoding approach which ensembles the logits for the next token to be predicted by the MLLM using the attention weights of the original full-sized image and the N sub-images with the text prior. The authors conduct a systematic study which shows the two main causes of object hallucination in MLLMs as the number of objects in an image and the image resolution. They also provide systematic quantitative metrics which show improvement on hallucination benchmarks compared to existing SoTA methods and regular decoding approaches. The authors present a latency analysis of their method compared to other decoding-based methods and provide an alternative baseline of FastED which balances accuracy with speed. Overall the paper is well written and conducts systematic experimental analysis."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1) While the idea of splitting the original image into N sub-images is well motivated from the pilot study, the approach to utilize the attention weights to re-weight the logits lacks theoretical/empirical justification. Even intuitively, it would make sense to use attention weights for prompts such as \"is there a spoon in the image\" to put a higher emphasis on the sub-image which contains a spoon. However, for generic prompts such as \"describe the image\", the attention weights across the sub-images should be more or less equal. In such cases I am not sure of the reason why it might be useful to weight the logits by the attention scores. \n2) Adding on to 1, the paper lacks experiments/ablations which justify the use of attention weights for the decoding process. Adding ablations which vary the temperature parameter might be useful to understand the impact of attention on results. For example, a useful result to conclude the validity of the attention weights could be setting very high values of the temperature parameter, leading to sampling from a uniform distribution across the sub-images. \n3) The paper does not include experiments on tasks other than hallucination and image captioning. For example on the MME benchmark, out of the 16 tasks, the authors only include results on the existence, count, position, and color sub-categories. It would be good to look at the results on some of the other sub-tasks such as code reasoning, OCR, common-sense reasoning, text translation etc. (as done in the VCD paper) to ensure that this method of decoding does not regress the performance on such benchmarks. \n4) The authors show results for FastED only on CHAIR and LLava-Bench, it would be good to see results of FastED on POPE and MME benchmarks as well \n5) This is already mentioned in the limitations section, but the method only works for MLLM architectures which use a linear projector and does not work with resampler and Q-former based architectures (BLIP-2, Qwen-VL). This limits the generalizability of the technique across a wide range of model architectures."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduce Ensemble Decoding (ED), a method designed to mitigate object hallucination in Large Vision-Language Models by dividing an image into sub-images and combining logit distributions with attention-guided weights to improve accuracy."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024do,\ntitle={Do You Keep an Eye on What I Ask? Mitigating Multimodal Hallucination via Attention-Guided Ensemble Decoding},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=ziw5bzg2NO},\nnote={under review}\n}"
},
"abstract": {
"value": "Recent advancements in Large Vision-Language Models (LVLMs) have significantly expanded their utility in tasks like image captioning and visual question answering. However, they still struggle with object hallucination, where models generate descriptions that inaccurately reflect the visual content by including nonexistent objects or misrepresenting existing ones. While previous methods, such as data augmentation and training-free approaches, strive to tackle this issue, they still encounter scalability challenges and often depend on additional external modules. In this work, we propose Ensemble Decoding (ED), a novel strategy that splits the input image into sub-images and combines logit distributions by assigning weights through the attention map. Furthermore, we introduce ED adaptive plausibility constraint to calibrate logit distribution and FastED, a variant designed for speed-critical applications. Extensive experiments across hallucination benchmarks demonstrate that our proposed method achieves state-of-the-art performance, validating the effectiveness of our approach."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Hallucination",
"Multimodal Hallucination",
"Large Vision-Language Model"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/3f400bf61aa224b25e791c8ad685d13d9d53f81b.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/57dae519d290d7fd21147745880a0c0507e61cb2.zip"
},
"title": {
"value": "Do You Keep an Eye on What I Ask? Mitigating Multimodal Hallucination via Attention-Guided Ensemble Decoding"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zjAEa4s3sH | Lines of Thought in Large Language Models | main | Active | LLM;latent space;token trajectories;interpretability;transformer | interpretability and explainable AI | 3;3;6;8 | 3;3;3;3 | 1;3;3;3 | 1;2;3;3 | 2;2;3;4 | 5 | 3 | 2.5 | 2.25 | 2.75 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "In addition to the concerns raised in the weaknesses,\n\n- I find the point being made in the paragraph between lines 195-201 very interesting, but I am then immediately confused by the following paragraph. Could you please clarify this?\n\n- Could you please clarify the paragraph between lines 248-251?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- I find the question posed, and the consequent findings very interesting\n\n- I really appreciate the development of a linear approximation to the distribution of trajectories."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper aims to study the statistical properties of what they call *lines of thought*; trajectories traced by the embedded tokens through the latent space while traversing successive transformer layers. The key observation is that independent trajectories cluster along a low-dimensional manifold, and that their paths can be approximated using a simple dynamics model."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- One of my biggest gripes with the paper is, while I wanted to be excited about the findings, a recurring question that I had was \"why should I care\"? It is my opinion that the authors should invest in a motivation for why the reader should care about the presented findings. It's not clear to me what the takeaways are, or more concretely, how we can utilize the observation to develop better LLMs for instance.\n\n- A second gripe, which perhaps goes hand-in-hand with my first one, is that I often felt the paper could have used a bit more hand holding, or even been organized better. To make this concrete, in Section 3.1, the findings are presented first and then the methodology employed to find them was presented, which to me was a bit confusing. Section 3.2 starts abruptly with the sentence \"The fast decay of the singular values...\". Section 3.4 is titled \"Langevin Dynamics...\" with no mention of Langevin Dynamics. Furthermore, I often found myself having to reference many different figures in different parts of the paper while reading a single paragraph. All of this made it harder to follow the paper closely.\n\n- (Stylistic nitpicking) I think some of the writing maybe embellishes on details that maybe are not very important e.g. the methodology section starts with what I believe to be details that can be abstracted away. I also think the footnotes are somewhat excessively used.\n\n- I'm not really sure what is meant by \"pilot\" in this context. My understanding is that a pilot is \"done as an experiment or test before introducing something more widely.\"\n\n- The authors claim that the pseudo-sentences are \"independent\" (line 171) which is not immediately clear to me given that the pseudosentences are chunks produced from a single piece of writing.\n\n- I find the notation used in the algorithm confusing. Why do we sometimes use parentheses and other times square brackets? Also, wouldn't indexing with t+1=25 at the last iteration be undefined?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Some questions were raised above. In addition\n\n- This work reminds me of neural ODEs. Are there any connections between them and the viewpoint this paper explores?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "- There has been a flurry of works trying to understand the inner workings of LLMs. Therefore, this direction is relevant and interesting.\n\n- This work studies this problem from a unique flow-based perspective by studying the dynamics of the embeddings as they evolve in the layers. The perspective is novel to the best of my knowledge and may potentially lead to a new perspective or algorithm to improve interpretability of LLMs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work studies the dynamics of the embedding space as the input tokens go through the layers of a transformer architecture. The authors break the input prompt into sequences of tokens and study the embeddings of the last token as it propagates through the layers. Using PCA-like projections, it's shown that these embeddings approximately lie in a low-dimensional manifold. Then, by rephrasing these trajectories using linear approximations, the authors attempt to model it via Langevin dynamics which gives rise to the standard Fokker-Planck probability flow. One of the main conclusions is that transformers can be \"distilled\" into few parameters. To validate their assumptions, the authors experimentally study these trajectory on non-language inputs and untrained models and show that their ideas weakly hold. The target audience are people interested in the physics and interpretability of large language models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- While potentially interesting, the paper feels too vague and very high-level without any concrete theoretical or experimental contributions.\n\n- No new theoretical contributions are made, other than standard langevin dynamics formulations of their ideas. The projection to lower dimensions and linear approximations are also somewhat too lossy, as the authors note, so it's not clear how well the observations here actually hold in real life.\n\n- Experiments seem limited to a few models and as the authors note, the last layer seems to form an outlier for the Mistral 7B and the Llama models. While the authors suggest some reasons for this, it's not inherently clear why these happen and bring the central hypotheses into question."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. It is unclear to me what does this sentence (”the uncertainty (or stochasticity) introduced here accounts only for the loss of information of considering the token without its prompt.”) in footnote 8 mean?\n2. For figure 3, why u_2 and u_3 without u_1?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.The idea of this paper is kind of interesting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the trajectory of token embedding across layers. Inspired by dynamic system, they model the trajectories as diffusive process with a linear drift and a modified stochastic component."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. the Gaussian assumption seems to not hold in early layers.\n2. It is unclear if the same type of paths would hold for larger and more complex model as we already see problems with newer model like LLaMA-3. I think it makes sense to model the intermediate layers with diffusion process but early and last layers might not work not well.\n3. The theory here does not lead to any practical predictions. For example, can you use this model to predict next token?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Questions: (annotated with line number/section)\n125: The vector for every token is projected to form the logits, not just the last vector. For inference, we only care about the logits in the last token'th position, but that doesn't mean the logits must only come from there.\n130: Algorithm 1 is overkill. Just saying \"we cache the activations between each transformer block in the sequence position of the last token over a forward pass for each psuedo-sentence $s_i$\" would be sufficient. The algorithm is not doing anything interesting that the transformer would not already do during a forward pass (other than the caching of activations).\n145: It is advised for ICLR to notate the last token position for layer $t+1$ as $\\mathbf{E}^{t+1}_{:, end}$ rather than $\\mathbf{E}(t+1)[:, end]$ (check the ICLR formatting guide).\n152: The latent space is obviously spanned by a Cartesian basis. Any such space is.\n155: You then revert to the ICLR standard notation for indexing a matrix that you did not use on line 145. Pick one and be consistent.\n175-176: Replace \"Because the Cartesian axes, $\\mathbb{e}_i$, are unlikely to align with trajectories meaningful directions,\" with \"There is no reason a priori we would expect the Cartesian axes to align to meaningful directions for the trajectories of the activations\"\n177: Concatenating the $\\mathbb{x}_k(t)$? Over what dimension? $k$ or $t$ or something else?\n154: Does it makes sense to plot the trajectories such that at each layer we use a different basis? Seems like you'd want to use the same basis throughout the entire forward pass if possible, as then each step along the trajectory is measured in terms of something different. I guess maybe the size of the principal component, with respect to the most important eigenvector is still interesting. This was partially addressed by Fig. 2a. 
showing that all the dimensions are important at some point during the forward pass, and that the low-dimensional manifold \"wanders around\" the entire latent space.\n\n\nSection 3.2: At first glance this seems a reasonable choice of metric: The KL divergence between the normal output distribution, and the output distribution after having performed dimensionality reduction via SVD. Is this a metric that is defined elsewhere in the literature? Cite if so. I wonder if SVD is the best thing to do here: Is there a better choice of low-dimensional manifold that can only be obtained via a non-linear transformation? Perhaps if an autoencoder was trained that compresses/uncompresses the latent space on forward passes, could you squeeze the activations into an even lower dimensional space while preserving the KL divergence?\nHowever, what isn't clear to me is that how much displacement of KL divergence is a lot, and why $K_0 = 256$ was the value for which \"most\" of the true distribution is recovered. What is \"most\"? What makes a KL-divergence of ~0.7 \"not much\"? It would be good to have some sort of sense of scale what the downstream effects are (e.g. measure the performance on some benchmark before and after throwing away 75% of the activations, and see how much \"dumber\" the model gets.)\n\nFigure 2(c): What precisely is the baseline defined as? The KL divergence between the output distributions conditioned on two unrelated inputs, or with random noise injected into the unembedding matrix, or something else?\n\nFigure 3: It's not clear how to compare the true and extrapolated positions. If the cluster comparison is desired, maybe colour one red, and the other blue, but both have transparency? So the purple region is the overlap? It's hard to see with the grey just covering the blue. 
I also would have liked a more principled metric to measure how well the model suggested in Equation (1) captures the dynamics of the activations, and what component remains unexplainable.\n\n255-256: The rotation matrix $\\mathbf{R}(t)$ was substituted out for $\\mathbf{U}(t)$, but $\\mathbb{\\Lambda}(t,\\tau)$ was not substituted out for $\\mathbb{\\Sigma}(t+\\tau)\\mathbb{\\Sigma}^{-1}(t)$. I think either substitute both (my preference) or neither.\n\n619: Forgot to bold $\\Lambda$"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* Strong evidence provided to back up the claims made. The authors do a great job of explaining what it is they are measuring and why.\n* The result is fascinating, computationally cheap to obtain, model agnostic, and is empirically backed by measuring the KL-divergence of the distribution after down-projecting the dimensionality.\n* The theoretical justification on the proposed model is well motivated and explained in the appendix.\n* See questions"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors describe a framework that allows tracing the path taken by the activations in latent space during a forward pass on some input text. They note that despite the complexity of computing a forward pass, the trajectory taken is rather simple, and the activations live on a low-dimensional manifold that moves through latent space as a function of the number of layers processed. They provide a stochastic model that describes the trajectory, with vastly fewer parameters than the network itself. The approximation closely agrees with the actual activations, and can be used to extrapolate the behavior of the model."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* Some claims on the similarity of extrapolated token positions are a bit weaker, and are primarily based on visually comparing the extrapolated and actual activations projected down to a low-dimensional space. I would have liked to have seen a more concrete metric of comparison, together with some baseline values to compare against to indicate that they really are close.\n* I would have liked to have seen some applications of this discovery, or at least conjectures for what it could be used for. For example, could one project down the weight matrices into the space provided, and obtain a network that has similar performance and is much smaller, at the expense of off-distribution task degradation? Essentially, a way to distil a pre-trained network down if we only care about performance on one task?\n* I would elaborate on the potential use-case for a continuous interpolation between layers. Is there any sort of intuitive meaning we can assign to this?\n* See questions"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We study token trajectories in an LLM latent space and find that they cluster along a low-dimensional subspace."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024lines,\ntitle={Lines of Thought in Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zjAEa4s3sH},\nnote={under review}\n}"
},
"abstract": {
"value": "Large Language Models achieve next-token prediction by transporting a vectorized piece of text (prompt) across an accompanying embedding space under the action of successive transformer layers. The resulting high-dimensional trajectories realize different contextualization, or 'thinking', steps, and fully determine the output probability distribution. We aim to characterize the statistical properties of ensembles of these 'lines of thought.' We observe that independent trajectories cluster along a low-dimensional, non-Euclidean manifold, and that their path can be well approximated by a stochastic equation with few parameters extracted from data. We find it remarkable that the vast complexity of such large models can be reduced to a much simpler form, and we reflect on implications."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"LLM",
"latent space",
"token trajectories",
"interpretability",
"transformer"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/258bb2ec3fe5e917757a52caa846f5bb352de7da.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Lines of Thought in Large Language Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zjeHLSiNv1 | Ultra-Sparse Memory Network | main | Active | Large language model;sparse model;scaling law | foundation or frontier models, including LLMs | 5;5;6;6 | 4;4;2;4 | 3;4;3;3 | 3;3;3;3 | 3;2;2;2 | 5.5 | 3.5 | 3.25 | 3 | 2.25 | -0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Suggestions:\n* Focus on the high-level motivation before diving into details of methodology and experiments.\n* Move the related work up front in the paper, and put in background required for your method. Include MoEs, PKM, Tucker decompositions, Megatron, any background relevant to IVE and MCS, etc.\n* Fix readability of all the text in the figures.\n* Figure 2 (and to some extent Figure 3) are poorly labelled/captioned, and it's largely left to the reader to figure out what each side represents and how it's related to the method. At a minimum label the subfigures or describe in caption what left/right are. Would also suggest positioning floats at top of page.\n* When including a large table of metrics (i.e. in Table 1), it's helpful to the reader to explain whether lower or higher is better for each, e.g. with an arrow in the header."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "* Memory access is perhaps the main bottleneck in inference for contemporary hardware, and so this is a well-motivated and impactful direction to be exploring.\n* The results appear to show impressive improvements over Mixtures of Experts (MoEs) in inference time and memory access while maintaining validation loss/perplexity. \n* The authors methodology is detailed and thoughtful, for example deriving the correct initialization for their method (which would appear to also apply somewhat to PKM).\n* Many of the figures/graphics themselves are good in design and would be helpful to understanding the methodology if it wasn't for the other serious issues with readability and captions (see below).\n* Changes to PKM (before the UltraMem specific changes) are listed in 2.2 as the bag of tricks that improve PKM's performance, and are separate from the author's proposed UltraMem structure. It would be beneficial if more papers took such an approach to be clear about the differences between improved training methodology for baseline methods and a newly proposed method. Ablation in particular is nice about being clear how these changes improve baseline PKM performance.\n* Real-world inference and memory access results on GPU hardware.\n* Large-scale experiments, in particular results are evaluated on various model sizes, from 151M param up to 1.6B. FLOPS for the sparse models."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Motivated by the high memory access cost bottleneck of Mixtures of Experts (MoEs) for Large Language Model (LLM) inference, the authors propose an alternative methodology, UltraMem, based on Product Ley Memory (PKM) which improves memory access above MoEs, but suffers in performance compared to them. The authors make several observations to improve the vanilla PKM methodology, basing their method on this baseline. The authors then not three problems with the PKM architecture, notably that it does not scale to large value sizes, product key decomposition biases retrieval, and unbalanced GPU computation/communication for large models. The authors propose to decompose the large memory layer of PKM into many smaller memory layers distributed across layers, allowing execution of memory and transformer layers to overlap. The authors propose to use a Tucker decomposition instead of the product key decomposition of PKM. Finally, rather than explicitly maintaining a large memory table for values, the authors propose a virtual memory approach. The authors evaluate UltraMem across several tasks compared to dense and an MoE equivalent model, and compare the performance across these tasks, along with the inference and memory accesses for each, demonstrating significantly faster inference with similar performance to an MoE model."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* Overall the paper reads poorly, more as a collection of experimental details and results rather than a cohesive story. I know that's frustrating and perhaps vague feedback to get as an author but I think it's really important to point out as it makes the paper quite hard to read as it is, and reduces the impact of the author's work. To be clear, I do appreciate the experimental and methodological details themselves. However, I believe the authors could do the work much more justice by taking a step back and framing their research story better. In particular, I would recommend focusing first on section 3 as higher-level motivation of the method before diving into building on top of PKM. \n* All the figures containing the majority of results, and some of the methodology figures (Fig 4) are **far** too small. They are so small that the figures are completely unreadable on paper. I measured the font sizes in the figures out of curiousity, and most of the fonts are 2-3pt!\n* Given that the proposed method is largely based on PKM, PKM should be a baseline in the results. The authors mention PKM's \"..effectiveness is significantly inferior to that of MoE.\", without explaining on what measures this is true, or giving the reader to make that judgement in the results. I don't actually doubt the authors on this, but it must be demonstrated in the experiments.\n* While as noted above it's great that the \"bag of tricks\" for PKM is listed, along with the ablation, it also would seem important that the performance of the improved PKM should be evaluated as a baseline over just vanilla PKM, and compared with the improvements from the author's proposed method.\n* The related work is **far** too short given all the work done in this field, much of which is referred to by the authors and built upon in the proposed method. 
To be honest I'm never a fan of related-work at the end of the paper, but I feel like this paper in particular would have greatly benefited by the related work coming before the methodology rather than at the end of the paper, as instead of having to bring up related work throughout the paper as it is built upon, it could have just been earlier summarized and then referenced throughout, allowing the story to focus on the methodology more clearly.\n* Relies too much on validation loss/perplexity for the evaluation of whether generalization performance is maintained in the main paper/figures. The authors do have six other metrics for tasks in Table 1 and figures in the appendix for these tasks (which they should explain better in the paper/background rather than quickly citing all of them in a list). Given that the authors are in a sense proposing a form of sparsity, and given the results of Jaiswal et al. for pruning (Compressing LLMs: The Truth is Rarely Pure and Never Simple, ICLR 2024) I believe it's more important to focus on the performance in task-specific measures."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "see weakness."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper introduces the UltraMem architecture, which demonstrates innovation by significantly reducing inference latency while maintaining computational efficiency. \n2. The paper presents extensive experiments comparing the performance of UltraMem with traditional models (such as MoE and dense models), verifying UltraMem’s advantages in inference speed, memory access costs, and scalability. \n3. The experiments show that UltraMem’s memory access volume grows much more slowly with batch size compared to MoE, enhancing its practical utility."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a novel deep learning architecture called UltraMem, designed to reduce memory access costs during the inference process of large language models, thereby improving inference efficiency. The core innovation of UltraMem lies in its introduction of an ultra-sparse memory layer, which allows the model to activate only a small number of necessary memory units when processing tasks. This approach reduces the number of memory accesses, effectively lowering inference latency. Specifically, , UltraMem surpasses MoE with the same parameters and computation as model capacity increases."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper lacks references and descriptions of the architecture diagrams within the main text. For example, each step in Figure 4 is not referenced in the text. Additionally, certain terms in the architecture diagrams, such as “fetch values,” are not explained in the text, making the paper difficult to follow.\n2. The paper claims that the proposed UltraMem method has stronger scalability. However, UltraMem was only tested on models with 151M, 680M, and 1.6B parameters, without experiments on larger models, such as the 6.5B model or beyond. This parameter range is insufficient to fully verify UltraMem’s scalability in large-scale models. It is recommended to extend the experimental scale to explore UltraMem’s performance and scalability at extremely large parameter sizes (e.g., 10B and above).\n3. Several modules proposed in the paper, such as Implicit Value Expansion (IVE) and Multi-Core Scoring (MCS), collectively enhance UltraMem’s performance. However, the independent effects of each module are not adequately evaluated. For instance, Table 2 provides ablation experiments but lacks detailed analysis of the independent effects of key modules like IVE and MCS across different tasks or sparsity settings. It is recommended to expand on Table 2 by adding experiments that assess the independent impact of each optimization module on model accuracy.\n4. Figure 4 illustrates the process of Tucker decomposition but lacks analysis of how different decomposition parameters, such as rank r, affect model accuracy. To clarify the impact of different Tucker decomposition configurations on model performance, it is recommended to include more comprehensive ablation experiments to quantify Tucker decomposition’s specific role in UltraMem. Additionally, experiments should be added to analyze the effects of different values of E in the IVE method on experimental results."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "here are some minor writing issues, such as in Line 094, where it states 'introduce the origin large memory.' These should be easy to address.\n\nMy main questions and observations are:\n\n1. **Comparison with Fine-Grained Mixture of Experts**: How does this method compare to fine-grained Mixture of Experts as in [1], where the hidden dimension is split between experts?\n2. **Training Speed**: How does UltraMem’s training speed compare to that of MoE?\n3. **Placement of UltraMem Blocks**: On what basis were the positions of the UltraMem blocks within the architecture chosen?\n4. **Figure Quality**: Improving the quality of the figures and plots would enhance clarity.\n\n**Suggestions**:\n\n1. It would be beneficial to conduct experiments with popular open-source models like Llama-3.2-1B, Mistral, or others.\n2. Since Section 3, \"Why UltraMem Instead of MoE,\" discusses UltraMem’s advantages over MoE, it would be valuable to include experiments demonstrating that UltraMem consistently outperforms MoE in real-world scenarios, using a comprehensive baseline model for comparison.\n3. Additionally, an ablation study comparing UltraMem's memory efficiency and performance against MoE based on [1] would further strengthen the analysis if applicable.\n\n[1] *Scaling Laws for Fine-Grained Mixture of Experts*."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "This paper addresses an interesting and impactful problem with practical applications in large language model (LLM) research.\n\n- The authors propose an alternative, memory-efficient approach to achieving the performance of Mixture of Experts (MoE) models.\n- They introduce a sparse memory access mechanism and a 2D Product Key Memory structure, which restricts memory access to only the most relevant slots, enhancing efficiency.\n- **Scalability**: The use of Tucker Decomposition improves the scalability of the method, allowing it to handle larger models effectively.\n- **Experiments**: The experiments demonstrate promising results, achieving comparable performance to MoE models of similar scale while maintaining greater memory efficiency."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors propose UltraMem, a memory-efficient method designed to replace MLP layers or Mixture of Experts (MoE) in Transformer architectures, particularly for large language models. UltraMem builds on a refined Product Key Memory (PKM), utilizing a 2D grid structure with row-column key decomposition to reduce retrieval costs. Additionally, it employs Tucker Decomposition to further minimize memory usage and computational overhead. Experiments across various language tasks demonstrate UltraMem’s promising effectiveness and efficiency."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Although the idea presented in this paper is novel, the experimental performance assessment shows some limitations:\n\n1. **Lack of a Proper Baseline**: The authors do not use a well-established MoE baseline to evaluate the performance of their method. Since UltraMem is proposed as a memory-efficient alternative to MoE, it would be beneficial to compare it against real-world MoE implementations from existing open-source models, providing a more comprehensive analysis.\n\n2. **Limited Experiments with Popular Models**: The experiments primarily use custom models, limiting the generalizability of the results. It would be valuable to assess UltraMem’s effectiveness on popular open-source models like Llama, Mistral, or others.\n\n3. **Limited Analysis of Sparsity Levels**: The paper could benefit from a deeper investigation into the effects of different sparsity levels on memory efficiency and performance, as this is a key component of UltraMem’s approach."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. The proposed IVE aims to reduce memory access. Given that it handles the general form of matrix multiplication, could it also be applied to MoE or other methods that involve extensive memory access?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The techniques proposed in this paper are novel. On one hand, the paper thoroughly studies the techniques in PKM, offering valuable empirical insights for future research. On the other hand, it integrates algorithms with memory management to address the memory access issue. \n2. The paper includes an efficiency analysis focused on memory access.\n3. Experimental results show that the method significantly outperforms existing approaches on large models, being up to six times faster than MOE at the same scale."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a novel layer designed to increase model size efficiently. Specifically, the approach refines the existing PKM method and incorporates TDQKR as a sparse method to enhance performance. Additionally, it proposes implicit value expansion to address memory access challenges in large models. A comparative analysis of memory access between the proposed method and MOE is also provided. Experimental results demonstrate significant performance improvements."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The writing could be improved, particularly since it introduces concepts beyond the domain of algorithms. These should be better explained as background information. For example, the implicit value expansion technique is somewhat confusing when the paper discusses virtual and physical memory, and could benefit from further clarification.\n\n2. The experimental results in the main paper do not fully support the claims made in the methods section. First, while the paper asserts that the method improves PKM, there is no direct comparison except for validation loss. Second, although the paper claims that its memory access method outperforms MOE, this is only demonstrated in the appendix. The main table merely shows FLOPs and model parameters, which do not sufficiently illustrate the method's advantages (including the running time).\n\n### Minor:\n1. The dimensions in Equation (6) are inconsistent."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Ultra-sparse memory network significantly enhances the efficiency and scalability of large language models while maintaining performance. Compared to Mixture of Experts, it has a significant advantage in inference speed."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024ultrasparse,\ntitle={Ultra-Sparse Memory Network},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zjeHLSiNv1},\nnote={under review}\n}"
},
"abstract": {
"value": "It is widely acknowledged that the performance of Transformer models is exponentially related to their number of parameters and computational complexity. While approaches like Mixture of Experts (MoE) decouple parameter count from computational complexity, they still face challenges in inference due to high memory access costs. This work introduces UltraMem, incorporating large-scale, ultra-sparse memory layer to address these limitations. Our approach significantly reduces inference latency while maintaining model performance. We also investigate the scaling laws of this new architecture, demonstrating that it not only exhibits favorable scaling properties but outperforms traditional models. In our experiments, we train networks with up to 20 million memory slots. The results show that our method achieves state-of-the-art inference speed and model performance within a given computational budget."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Large language model",
"sparse model",
"scaling law"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/d1b9689dedf5487b54ee9a63cb6b6a91ed230ec9.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Ultra-Sparse Memory Network"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zkGxROm7D3 | State & Image Guidance: Teaching Old Text-to-Video Diffusion Models New Tricks | main | Active | Text-to-Video Generation;Diffusion Models;Diffusion Guidance;Zero-shot Image-to-Video Generation | generative models | 5;5;5 | 3;3;5 | 2;3;3 | 2;2;2 | 2;2;3 | 5 | 3.666667 | 2.666667 | 2 | 2.333333 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to the weaknesses section"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "# Strengths\n\n- The proposed framework is training-free, and indeed achieves better motion dynamics for the mentioned types of prompts\n- The proposed framework outperforms the mentioned baselines"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "# Summary\n\nThe paper proposes a training-free framework for generating better motion dynamics and adding image conditions with existing pre-trained T2V models. Extensive experiments demonstrate the effectiveness of the proposed framework"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "# Weaknesses\n\n- In Sec. 3, the definition of diffusion models might be incorrect\n- The idea of state guidance seems similar to the deforum-like technique used in the stable diffusion user community\n- While the guidance schedule seems reasonable, the paper does not mention how it was designed/selected\n- Entries in the proposed dynamic scenes benchmark consist of three states tailored for the proposed framework, resulting in inconsistent experiments settings when compared with other baselines which does not support this type of inputs. In that case, is unclear how reliable the proposed benchmark is\n- The paper does not mention how generated results were selected. Considering diffusion models can generate various of results from the same input conditions from different seeds, it would be better to report mean+std for each metric and report the success rate of each generation\n- For II2V experiments, the paper does not compare with SEINE\n- The scale of user study seems relatively limited and the design of user study seems flawed: \"more changes\" in question 2 does not necessarily indicate the result is better in terms of visual quality. The results with flickering artifacts and incorrect/unfavourable color changes could also be considered as \"more changes\"\n\n# Other comments (not weaknesses)\n\n- The paper coined a new term II2V, which is actually frame inbetweening/interpolation"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "In addition to the key information missing mentioned above, I have several questions related to the details of the paper:\n\n1. How to handle different combinations of text and image prompt? For example, what if we have the triplet description and only the end frame of the generated image? How is this case different from the case where we have the triplet description and only the first frame of the generated image?\n\n2. How is the method able to handle the morph transform even although the pretrained model is rarely trained on videos with morphism since it is not common? More discussion on this would be appreciated.\n\n3. Have the authors try a more detailed caption vs. the simple ones used in the paper? Will that lead to better motion?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The proposed method is practically useful given that:\n\n* It enables the transformations that would be very difficult to model with pretrained models, such as morph transformation, drastic texture changes over frames. \n\n* The method is training-free and does not require text-video pairs with the target motion patterns, which is hard to collect in scale. The training free method achieves comparable performance as the compared training-based method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a novel video generation guidance method capable of producing videos with drastically different cross-frame descriptions, such as texture changes, morph transformations, and large motions. The core innovation of the method is the state triplet, which decomposes the video into different phases, each with its own distinct description. The state triplet can be generated either from a large language model (LLM) or manually. The proposed method is training-free and has been evaluated on a new video benchmark, achieving performance comparable to methods that require task-specific training."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The T2V limitation mentioned in L202 is not convincing enough. The prompts used in the paper are relatively simple, consisting of only one or two sentences. Some related works, such as CogVideoX [1], utilize DiT-based structures and demonstrate that detailed prompts significantly improve video generation quality, both in appearance and motion. Therefore, if we consider a scenario with both detailed video captions and a DiT-based model that relies less on explicit per-frame modeling due to (1) temporal compression in the tokenizer, and (2) stronger spatial-temporal modeling capabilities (i.e., cross-frame modeling rather than per-frame), the limitation highlighted for the T2V model becomes less relevant. This is because we would not be restricted by a limited T2I model (point 2) and would benefit from enhanced spatial-temporal modeling (point 1). \n\n- The paper lacks key information on how the transition order is maintained. While Eq. 1 models the joint conditional distribution given the prompt triplet, it does not specify how the generated images are constrained to follow the prompt order: initial -> transition -> final stage. Ensuring this sequential alignment is crucial for achieving controllability and realism in the generated video.\n\n- As mentioned in the limitation section in the supplementary, the method introduces additional hyper-parameters, such as the guidance scale at for the triplet states. Tweaking those hyper-parameter would be a case-specific effort and paper does not propose a principled approach for estimating/optimizing those hyper-parameters. \n\n- As mentioned in the first item, there are strong models taking much more descriptive prompt as input for video generation. However, the paper does not include the comparison with those methods. The lack of this comparison makes the claim about the T2V limitation and the proposed method less convincing.\n\n[1] CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer. 
Yang, Zhuoyi and Teng, Jiayan and Zheng, Wendi and Ding, Ming and Huang, Shiyu and Xu, Jiazheng and Yang, Yuanming and Hong, Wenyi and Zhang, Xiaohan and Feng, Guanyu and others. arXiv preprint arXiv:2408.06072"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Can the authors elaborate on the comparison to \"Make Pixels Dance\"?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The extensions are reasonable and the various experiments does show nice videos produced by the system.\n\nIn addition, a new dataset is introduced that, hopefully, will help future contributions to the field."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper describes a method for Text To Video (T2V) generation.\nThe method proposes two extensions: State and Image guidance.\nState Guidance uses state triplets (initial, current, last) to help T2V generate the proper video frame.\nImage guidance injects noise in the early stages of the diffusion model to steer it in the right direction.\nThis is then used to: generate more dynamic video sequences, as well as zero shot video generation from a single image, and a video interpolating between two images."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper feels rushed. (The caption of the teaser figure misplaced the text for sub-figures (B) and (C)).\n\nI wonder what is the novelty of the proposed method given the \"Make Pixels Dance\" (CVPR'24) paper.\nThey, too, use a triplet state representation to encourage better video synthesis. \nYet, I could not find a direct comparison. Can the authors explain why?\n\nThere are many results and one must appreciate the work done by the authors, but it is extremely difficult to follow the experimental results and appreciate the contributions. For example, \n\n1. Please add a reference to the different methods shown in the various tables.\n2. I'm not sure the ablation experiments should appear in the main text. \n3. The supplemental material is difficult to navigate. There are many folders and no easy way to navigate and compare the different results presented there.\n4. Table 3: The boldface numbers are confusing as they only refer to the method without SG. Yet, in almost each column there is a better alternative, so it's difficult to judge the overall quality of the results.\n5. Table 5: It is confusing to compare VC2+IG with three thresholds to TI2V-Zero and then highlight in bold different measures for different thresholds."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We present two new sampling methods for Text-to-Video (T2V) diffusion models that enhance pre-trained models, allowing for dynamic scene generation and zero-shot image-to-video and image-image-to-video generation (based on the first and last frames)."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024state,\ntitle={State \\& Image Guidance: Teaching Old Text-to-Video Diffusion Models New Tricks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zkGxROm7D3},\nnote={under review}\n}"
},
"abstract": {
"value": "Current text-to-video (T2V) models have made significant progress in generating high-quality video. However, these models are limited when it comes to generating dynamic video scenes where the description per frame can vary dramatically. Changing the color, shape, position and state of objects in the scene is a challenge that current video models cannot handle. In addition, the lack of a cheap image-based conditioning mechanism limits their creative application. To address these challenges and extend the applicability of T2V models, we propose two innovative approaches: **State Guidance** and **Image Guidance**. **State Guidance** uses advanced guidance mechanisms to control motion dynamics and scene transformation smoothness by navigating the diffusion process between a state triplet <initial state, transition state, final state>. This mechanism enables the generation of dynamic video scenes (Dynamic Scene T2V) and allows to control the speed and the expressiveness of the scene transformation by introducing temporal dynamics via a guidance weight schedule across video frames. **Image Guidance** enables Zero-Shot Image-to-Video generation (Zero-Shot I2V) by injecting reference image into the initial diffusion steps noise predictions. Furthermore, the combination of **State Guidance** and **Image Guidance** allows for zero-shot transitions between two input reference frames of a video (Zero-Shot II2V). Finally, we introduce the novel **Dynamic Scene Benchmark** to evaluate the ability of the models to generate dynamic video scenes. Extensive experiments show that **State Guidance** and **Image Guidance** successfully address the aforementioned challenges and significantly improve the generation capabilities of existing T2V architectures."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Text-to-Video Generation",
"Diffusion Models",
"Diffusion Guidance",
"Zero-shot Image-to-Video Generation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/4e4a13a6339e2ef928aa1dcce869c49461d6fb62.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/be0096fe8500c1a028622ddfcbae82be51f57cde.zip"
},
"title": {
"value": "State & Image Guidance: Teaching Old Text-to-Video Diffusion Models New Tricks"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zkMRmW3gcT | Elucidating the Design Space of Language Models for Image Generation | main | Active | Image generation;Large language model;Generative model | foundation or frontier models, including LLMs | 3;3;5;5;5 | 4;3;3;3;3 | 2;2;3;3;2 | 2;2;3;2;2 | 2;2;2;2;3 | 4.2 | 3.2 | 2.4 | 2.2 | 2.2 | -0.612372 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Could the proposed design principles for image generation apply to other visual tasks, such as video generation?\n\nHow does the model handle diverse visual patterns, such as textures or irregular structures, compared to more common AR or diffusion models?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper examines different components of language models in image generation, such as tokenization methods, model scalability, and sampling strategies, providing insights into optimizing language models for visual tasks.\n\nThe ELM model integrates binary autoencoders for effective tokenization and AR models for scalability and performance. Extensive experimentation validates the authors' design choices, as ELM achieves high performance across various model sizes."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates the application of language models for vision generation tasks by exploring differences between image and text token distributions, the training dynamics of autoregressive models versus masked language models (MLMs), and the efficacy of different image discretization approaches. The authors propose a new model, ELM (Elucidated Language Model for Image generation), which combines AR modeling with Binary Autoencoder (BAE) for discretization. Key insights include the advantage of AR models for capturing image structures, the benefits of BAE in reducing computational costs and improving performance, and the ability of AR models to learn effective image patterns without inductive biases."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While the paper positions ELM as an alternative to diffusion models for image generation, more direct comparisons with these models could strengthen the evaluation. How does ELM compare in performance and efficiency to diffusion models in a similar setting?\n\nWhile the study uses ImageNet as a benchmark, it doesn’t explore a broader range of datasets or domains that might reveal limitations in model generalization. How might the model’s performance be affected by training on datasets with higher resolution images or more complex scenes than ImageNet provides.\n\nHow robust is ELM in scenarios requiring conditional generation with complex prompts or context?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "It would be great if the authors can address the weakness mentioned above, especially a more thorough discussion and comparison with MAR."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1.\tResearch questions proposed by this paper, i.e. how to design language models for image generation, is a very timely and important subject to investigate. \n\n2.\tThis paper presents substantial efforts in searching for better alternatives compared to the dominating tokenization approach that uses a pretrained VQGAN model. \n\n3.\tTheir proposed ELM achieves SOTA performance on class-conditional ImageNet 256X256 image generation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this work, the authors investigate the effects of different modeling choices for using language models for image generation. Specifically, they compare two tokenizers (VQGAN and BAE), two loss functions (AR and MLM), vocabulary designs, and different sampling strategies. Based on this exploration, the authors propose the ELM model and achieve strong performance on the class-conditional ImageNet 256X256 image generation task when compared with competitive AR baselines."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While this paper explores an important research direction, the attempt is nevertheless lacks rigor and stronger experimental validation. Specifically, my main concerns include the following: \n\n1.\tFailure to compare with continuous-valued tokenization: As the most competitive baseline shown in this paper, MAR, which uses continuous-valued tokenization, achieves very comparable performance (FID 1.55) in comparison to ELM (FID 1.54). However, the authors did not include continuous-valued tokenizers in their design space.\n\n2.\tFailure to compare with masked autoregressive loss: Similarly, MAR uses masked autoregressive loss, which is also not included in their design space. \n\n3.\tUnconvincing comparison with MAR: Note that MAR achieves almost identical performance as the best performing ELM with only half of the parameter size. And when comparing the two models with similar parameter size, MAR shows very clear advantages over ELM. This shows the minor improvement that ELM brings – especially with the discussion of scaling laws, one can imagine scaling up MAR further can potentially also improve the performance, and therefore MAR can potentially outperform ELM if scaled up to 2B parameter size.\n\n4.\tFailure to explore the effect of token orderings and token types: Even if the authors intend to only compare discrete tokens, they have missed several comparison points such as the ordering of the tokens (e.g. scanline, zig-zag, etc) and the types of the tokens (e.g. image patches, image scales, etc). These design choices are arguably equally important and underexplored as the ones presented in the paper. \n\n5.\tContradicting discussions and conclusions: In Section 3.1, the authors noted that training loss is not a good indicator for the model performance. However, in later sections, e.g. when analyzing the scaling laws, the authors still use the training loss as the metric for evaluation. 
\n\n6.\tA few claims: \na.\tIn Section 3.1, the authors claim that “image data lacks the inherent structure” from their KL divergence analysis with uniform distribution. Not only does this analysis have no connection to the conclusion, the claim itself is also invalid without further assumptions (e.g. [1]). If the authors had explored different token types (e.g. the scale tokens), they may also observe different divergences and may reach different conclusions.\nb.\tIn Section 3.4, the authors use the visualized attention maps to study the ability to learn global v.s. local information. However, this kind of visualization does not necessarily accurately reflect the actual functionality of the network according to [2]. \n\n7.\tFailure to explore the combinatorial design space: The authors fix the tokenizer choice when comparing different loss functions, tokenization designs and sampling strategies. However, they fail to consider the combinations of these choices with the other tokenizer choice. For example, it is possible that VQGAN tokens + MLM behaves differently with AR and MLM, or even potentially performs better than BAE tokens + AR. Given that VQGAN is a popular choice for many AR image generation models, it would be valuable to analyze behaviors with VQGAN tokens.\n\nA few minor suggestions: \n1.\tLine 164, the notation of the conditional probability is very confusing and not conventional\n2.\tAlmost all plots have fonts that are too small and therefore hard to read\n\nOverall, as a paper that claims to “elucidate the design space of language modeling for image generation”, the explored design space in this paper is not very comprehensive and the authors miss multiple obvious marks that have high likelihood of further improving the performance. \n\nReferences:\n[1] M. A. Turk and A. P. Pentland, \"Face recognition using eigenfaces,\" CVPR 1991.\n[2] Wen, Kaiyue, et al. 
\"Transformers are uninterpretable with myopic methods: a case study with bounded Dyck grammars.\" NeurIPS 2024."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please see weakness section for more context.\n\n* What are new findings from Section 3.2 and 3.5 compared to [[Yu et al., 2024](https://arxiv.org/pdf/2310.05737)]?\n* Regarding Section 3.1:\n * In Table 1, how is \"bigram\" distribution obtained from the image tokens? Specifically, how is \"consecutiveness\" defined? Raster-order?\n * The sentence in line 216 \"the randomness in image token distribution implies that image generation doesn't depend on strict sequential patterns\" is not well justified. Unigram distribution being uniform only suggests that the frequency of items in vocabulary is uniformly distributed, but it is unclear how unigram distribution could imply anything about the sequential pattern.\n * Ultimately, what is the take-home message of the section 3.1? Image and text tokenizers have different distribution and image tokenizer seems to have more randomness, so it is more difficult. And? What design choice consideration should we be making knowing that the difference in distributions?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* The paper has clearly laid out some important design choices of building language model based image generation model. The recipe provided in this paper could be useful to the community."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper investigates the design space of image generation using language models, whose space includes the type of image tokenizer (e.g., vector-quantized or binary-value quantized auto-encoders, decomposition), type of language model (e.g., autoregressive or masked language model), scaling behavior (e.g., learning vs model sizes, vocabulary size vs model size). With extensive study, the paper suggests an optimal combination of design choices, leading to a strong image generation performance on class-conditional image generation on 256x256 ImageNet, whose performance is on par with existing state-of-the-art methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* Overall, I didn't find the findings of the paper new, surprising, or significant over those from previous works.\n\n* The design space elucidated in this paper is mostly from existing works. While it is great that the paper has compiled them into a single paper, the contribution is very limited by nature. Furthermore, the optimal design choices made in this paper is not very different from the previous findings, which limit the contribution of the paper to confirm what is already known.\n\n * Section 3.2 Tokenizer choice: VQGAN vs BAE is repetition of the study conducted by previous work [Yu et al., 2024]\n * Section 3.5 Vocabulary design has been studied in [Yu et al., 2024] (Section 3.1, paragraph \"Token factorization for efficient prediction\") and confirmed decomposition into two subcodes being generally optimal.\n\n* Section 3.1 Image generation vs text generation, which compares the property of image and text tokenizers, does not seem rigorous and conclusive. \n * In Table 1, how is \"bigram\" distribution obtained from the image tokens? Specifically, how is \"consecutiveness\" defined? Raster-order?\n * Token distribution analysis seems misleading. For example, the sentence in line 216 \"the randomness in image token distribution implies that image generation doesn't depend on strict sequential patterns\" is not well justified. Unigram distribution being uniform only suggests that the frequency of items in vocabulary is uniformly distributed, but it is unclear how unigram distribution could imply anything about the sequential pattern.\n * Ultimately, what is the take-home message of this section? Image and text tokenizers have different distribution and image tokenizer seems to have more randomness, so it is more difficult. And? 
What design choice consideration should we be making knowing that the difference in distributions?\n\n[Yu et al., 2024] [LANGUAGE MODEL BEATS DIFFUSION — TOKENIZER IS KEY TO VISUAL GENERATION](https://arxiv.org/pdf/2310.05737)"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- How does the inference speed of ELM compare to that of diffusion models?\n- Will the models/code be open sourced?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper explores many different dimensions of the LM setup, ablating across both the stage 1 tokenizer and the stage 2 LM.\n- Section 3 of the paper provides nice visualizations and sheds some light on how LMs learn the image generation task (which is not entirely intuitive, especially for the autoregressive nature).\n- This paper explores autoregressive generation, which has distinct advantages over diffusion models (for example, leveraging LLM infra and systems advances). Diffusion models have been the primary focus of the image generation community as of late, so further exploration into this paradigm is valuable."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper conducts a thorough analysis on the use of LMs for image generation. This approach usually involves two stages: (1) training an image quantizer to convert image patches into discrete tokens and (2) training a language model to model this token distribution. While this approach is not new, this paper dives deeper into the design choices used in this generation setup, providing insights into the architectural and hyperparameter choices that influence this regime. They demonstrate strong results on ImageNet generation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The evaluation results are only on 256px ImageNet images, which is saturated at this point. It would be more valuable if the results could be demonstrated on a different task, e.g., text-to-image generation on MS-COCO, or at least on other conditional generation datasets (e.g., CelebA or FFHQ). This would also ensure that the findings transfer to other datasets/tasks.\n- The paper suggests that ELM can be used to generate any size images, but there doesn’t seem to be evaluations done at higher resolutions. How does the model compare to 512px images, for example? Is it significantly better to use ELM to generate at 512px, compared to resizing 256px generated images?\n- All of the ablations are conducted on model architectures, but recent trends in deep learning suggest that data is more important than architecture. How do the LM models compare against diffusion models in terms of data efficiency/scale? This would be a valuable ablation to run, to find out if either model is more data efficient, or if either LM or diffusion models are better at small data regimes."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Overall, what do you think is the best strength of using LLM for visual generation than diffusion models?\n\nFor the tokenizer selection, have you considered using the VAE?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper studies the fundamental differences between the token distributions of discretized images and text. Which is an interesting direction.\n\nThe paper demonstrate the effectiveness of AR models and its potential on image generation on the ImageNet 256×256 benchmark."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This explores the potential of adapting autoregressive (AR) language models for image generation tasks and study its design spaces. The study investigates how these models can be optimized for image generation, considering the differences between text and image data. The authors identify key challenges, such as the greater randomness in image tokens compared to text tokens, which complicates the training process.\n\nA key focus of the paper is on analyzing the design space of language models for vision generation, including choices in tokenization (e.g., VQGAN vs. BAE), model scalability, and vocabulary design. Through extensive experiments, the authors find that BAE-based tokenization outperforms traditional vector-quantized approaches, and that AR models exhibit better scalability and image generation capabilities than masked language models (MLMs). The study also highlights the importance of model size, showing that larger AR models are better at capturing global context, while smaller models struggle with this aspect.\n\nThe main contribution is a comprehensive analysis of the design space for applying language models to image generation. The authors propose the Elucidated Language model for iMage generation (ELM), which achieves state-of-the-art performance on the ImageNet 256×256 benchmark. This work aims to inform future designs of language models for visual content creation and multi-modal inference."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper is not presented well and very empirical. I would suggest skip introducing too much details about preliminary works such as VQGAN, BAE without sharing insights designing them for image generation.\n\nThe main part of the paper is mainly about experimental comparisons but there are not comprehensive experiments made to support the claims.\n\nOnly image quality is reflected in figures such as Figure 8, I would suggest also adding text prompts and also possibly give more complex cases to compare the language understanding ability of autoregressively based LM for visual generation.\n\nThe fonts are too small in figure 4,5,6,7.\n\nOnly AR models are used as baselines in this paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "This work fully explores the use of language models for image generation, analyzing their optimization behavior, investigating tokenization, sampling strategies, and model scalability to achieve optimal performance."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024elucidating,\ntitle={Elucidating the Design Space of Language Models for Image Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zkMRmW3gcT},\nnote={under review}\n}"
},
"abstract": {
"value": "The success of autoregressive (AR) language models in text generation has inspired the computer vision community to adopt Large Language Models (LLMs) for image generation. However, considering the essential differences between text and image modalities, the design space of language models for image generation remains underexplored. We observe that image tokens exhibit greater randomness compared to text tokens, which presents challenges when training with token prediction. Nevertheless, AR models demonstrate their potential by effectively learning patterns even from a seemingly suboptimal optimization problem. Our analysis also reveals that while all models successfully grasp the importance of local information in image generation, smaller models struggle to capture the global context. In contrast, larger models showcase improved capabilities in this area, helping to explain the performance gains achieved when scaling up model size. We further elucidate the design space of language models for vision generation, including tokenizer choice, model choice, model scalability, vocabulary design, and sampling strategy, through extensive comparative experiments. Our work is the first to analyze the optimization behavior of language models in vision generation, and we believe it can inspire more effective designs when applying LMs to other domains. Finally, our elucidated language model for image generation, termed ELM, achieves state-of-the-art performance on the ImageNet 256×256 benchmark."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Image generation",
"Large language model",
"Generative model"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/8a5d60a2ec73c0abe5e6936b7fb8476e63df4946.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Elucidating the Design Space of Language Models for Image Generation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zkNCWtw2fd | Synergistic Approach for Simultaneous Optimization of Monolingual, Cross-lingual, and Multilingual Information Retrieval | main | Active | Information Retrieval;Multilingualism and Cross-Lingual NLP;Question Answering | applications to computer vision, audio, language, and other modalities | 3;3;3 | 4;3;4 | 3;2;2 | 1;2;2 | 2;2;3 | 3 | 3.666667 | 2.333333 | 1.666667 | 2.333333 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "None."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Addresses a relevant challenge in multilingual information retrieval.\n2. Provides comprehensive experimental validation across multiple benchmark datasets (XQuAD-R, MLQA-R, MIRACL)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a hybrid batch training approach for multilingual information retrieval by combining monolingual and cross-lingual training data. The core methodology relies on mixing different types of training data using probability weights α and β. While the implementation is straightforward, the novelty of the contribution is limited."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The primary contribution merely combines two existing training approaches with probability weights, presenting a straightforward and obvious solution.\n2. The paper employs translated QA pairs as data augmentation, creating an unfair comparison with baseline methods that do not utilize this advantage."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Related to weakness1, could this proposed method be extended to other tasks in addition to QA?\n2. Can you discuss my weakness 2.\n3. Can you discuss my weakness 3."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper shows that two standard batching strategies are complementary for information retrieval tasks, as the combination of them shows improvements."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper studies information retrieval tasks where monolingual, cross-lingual, and multilingual setups are examined. The paper studies different batch sampling approaches at the training time without modifying existing training loss (e.g., contrastive learning loss) or model architectures. Specifically, the paper argues that existing approaches either use (i) monolingual batching where the languages of query and documents are matched, but they can be of different languages, or (ii) cross-lingual batching where the languages of query and documents are different. Based on this, the paper proposes hybrid batching, which is the mixing of these two batching methods.\n\nExperiments are conducted on two base models (XLM-R and LaBSE) and evaluated on two tasks (XQuAD-R, MLQA-R, MIRACL). To train systems with data in various languages, the paper employs in-house machine translation to translate existing training corpora (described in Section 3.1). The experimental results show that hybrid batching, generally, outperforms monolingual-only and cross-lingual-only in a range of setups, including monolingual, cross-lingual, and multilingual."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Limited evaluations are only QA datasets (e.g., the main text only shows XLM-R and LaBSE). Also, the main text consists of many large tables where each does not present as much information as the space it takes, e.g., the authors could summarize how many languages/scenarios the proposed method shows improvements instead of providing large tables like Table 3, Table 4, Table 5, etc.\n\n2. It is not clear if the proposed method is actually effective. In many cases, the improvements appear rather small. For example, in Table 1, on XQuAD-R for XLM-R (0.792 vs 0.798; 0.705 vs 0.700; 0.593 vs 0.593). Are they even statistically significant?\n\n3. As this paper mainly provides empirical observations, it would be stronger if the paper provides insights on which scenario (e.g., what kind of base model or dataset) where hybrid batching is expected to show significant improvements and when it does not. The current paper pretty much reports experimental findings which could limit its usefulness. Several questions remain, for example, what is the size and mixed of training data does one need to see the impact of this hybrid batching? I expect that if there is limited training data, the impact would be marginal."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. What are the advantages of hybrid batch training strategy in terms of convenience, overall efficiency, and experimental effectiveness compared to existing multilingual information retrieval methods?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper proposes a hybrid batch training strategy to simultaneously improve zero-shot retrieval performance across monolingual, cross-lingual, and multilingual settings while mitigating language bias.\n2. The hybrid batch training strategy simply modifies the training data batches without necessitating the introduction of loss functions or new architectural components."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a simple method called hybrid batch training, which involves translating to obtain parallel data in multiple languages, and sampling these data to construct a multilingual training dataset. The model is trained by inputting monolingual or multilingual training data with a certain probability, thereby balancing its performance in both scenarios."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The proposed hybrid batch training strategy only modifies the input training data, which lacks novelty.\n2. This paper lacks sufficient analysis to the field of multilingual information retrieval. It does not adequately demonstrate the shortcomings of existing work nor the importance and necessity of this study.\n3. The experiments only compare the performance of different input strategies but not various multilingual information retrieval methods."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024synergistic,\ntitle={Synergistic Approach for Simultaneous Optimization of Monolingual, Cross-lingual, and Multilingual Information Retrieval},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zkNCWtw2fd},\nnote={under review}\n}"
},
"abstract": {
"value": "Information retrieval across different languages is an increasingly important challenge in natural language processing. Recent approaches based on multilingual pre-trained language models have achieved remarkable success, yet they often optimize for either monolingual, cross-lingual, or multilingual retrieval performance at the expense of others. This paper proposes a novel hybrid batch training strategy to simultaneously improve zero-shot retrieval performance across monolingual, cross-lingual, and multilingual settings while mitigating language bias. The approach fine-tunes multilingual language models using a mix of monolingual and cross-lingual question-answer pair batches sampled based on dataset size. Experiments on XQuAD-R, MLQA-R, and MIRACL benchmark datasets show that the proposed method consistently achieves comparable or superior results in zero-shot retrieval across various languages and retrieval tasks compared to monolingual-only or cross-lingual-only training. Hybrid batch training also substantially reduces language bias in multilingual retrieval compared to monolingual training. These results demonstrate the effectiveness of the proposed approach for learning language-agnostic representations that enable strong zero-shot retrieval performance across diverse languages."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Information Retrieval",
"Multilingualism and Cross-Lingual NLP",
"Question Answering"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/957319c99ea3b1b1f56edec0f51ef0515f25e466.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Synergistic Approach for Simultaneous Optimization of Monolingual, Cross-lingual, and Multilingual Information Retrieval"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zkn2tvtt8J | DiNO-Diffusion: Scaling Medical Diffusion Models via Self-Supervised Pre-Training | main | Active | Diffusion Models;Generative AI;Medical Imaging;Self-Supervision | generative models | 3;3;5;8 | 5;4;4;4 | 2;3;3;3 | 1;2;3;3 | 2;4;2;3 | 4.75 | 4.25 | 2.75 | 2.25 | 2.75 | -0.493742 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. How can synthetic images with semantic labels be generated for downstream tasks? From Figure 1, it seems that synthetic images cannot be generated with semantically meaningful conditions. The DM is conditioned on an image descriptor, which lacks semantics if the image is unannotated.\n\n2. In Figure 1, I don’t see any references to DiNO. How is DiNO incorporated into the proposed framework?\n\n3. In Figure 1b (i), the reconstructed image appears significantly different from the input image. How does the reconstruction network generate a horizontally flipped image?\n\n4. In Figure 2, the generated images, both reconstructed and interpolated, have lower intensity (appear darker) than real images. What is causing this? Are the images generated by the DM using reconstruction and interpolation features clinically meaningful?\n\n5. I don’t see a clear distinction between data augmentation and fully synthetic training. Both approaches require real data to train the DM, so real data is utilized in both cases. Therefore, their results should theoretically be similar.\n\n6. Using image embeddings as conditions is novel, but how does it improve interpretability compared to using text embeddings? Image embeddings are derived from existing real images, and obtaining embeddings for unseen or out-of-distribution images is challenging (not mentioned in the paper). Additionally, it's unclear whether mixing and matching image features is truly meaningful, as it is with text conditioning."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "+ The ablation study is thorough, with Table 1 examining multiple configurations of the proposed method. The comparisons among V1/V2, reconstruction/interpolation, and different rs ratios provide valuable insights.\n\n+ The authors include numerous qualitative visualizations, clearly illustrating the outputs of reconstruction- and interpolation-based methods for readers.\n\n+ The analysis of failure cases in Figure 5 is crucial for identifying the method's weaknesses and exploring areas for improvement."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a diffusion model designed to generate synthetic images for pre-training AI, which can supplement real images in downstream tasks. The proposed DiNO-Diffusion model offers the advantage of conditioning image generation on the images themselves, using features extracted by DiNO. Experimental results indicate that synthetic images provide effective data augmentation and hold potential for zero-shot segmentation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The novelty of the method is a concern, as there are no substantial modifications to Stable Diffusion. This is not necessarily a weakness if Stable Diffusion already performs the task effectively. However, since the authors introduce DiNO to the diffusion model (see title), it would be expected that they explain how DiNO is incorporated into Stable Diffusion. Yet, this integration is not shown in Figure 1 and is only briefly mentioned in the methods section.\n\n- It is unclear why the generated synthetic images are considered semantically diverse. The generated images rely solely on image features computed from the existing images in the training set or interpolated features from these images. There is no assurance that the generated images are out-of-distribution or that they can be controlled by interpretable features, as text-based conditioning would allow.\n\n- The proposed experimental settings—data augmentation and full synthetic training—are fundamentally similar. In the paper, data augmentation uses both real and synthetic images, while full synthetic training relies solely on synthetic images. However, since training the diffusion model (DM) requires real images, full synthetic training is not completely independent of real data; it merely uses the DM to encode information from the real data, allowing real images to be omitted during \"full synthetic training.\" Consequently, the results of these two settings are expected to be similar, as reported in the paper.\n\n- It is unclear how synthetic data can be used for downstream task training. Since downstream task training relies on supervised learning, how are annotations generated for the synthetic data?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Including a self-supervised encoder to assist medical image generation is a good idea. However, latent diffusion models [1] can also be trained without annotated data. In addition, latent diffusion models can incorporate multiple conditions such as text, images, and labels. In my opinion, it would be better for the authors to demonstrate that the embeddings generated by the self-supervised encoder are more informative and better suited to guide image generation. Specifically, I would suggest conducting experiments that directly compare downstream task performance (e.g., segmentation) between using DiNO embeddings and alternatives, such as embeddings extracted from the encoder of latent diffusion models."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The motivation of the paper is clear, aiming to facilitate the training of diffusion models for medical dataset generation in sparsely-annotated settings.\n\n2. It is a good idea to apply self-supervised learning for diffusion model training.\n\n3. The results are well-presented."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes DiNO-Diffusion models, which leverage embeddings generated by a self-supervised transformer trained with DiNO methods. The approach is evaluated on the MIMIC-CXR dataset for three tasks: reconstruction, interpolation, and segmentation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The structure of DiNO-Diffusion is similar to latent diffusion models [1], where the conditional latent features extracted from an encoder are injected at each denoising step. It would be interesting to include latent diffusion models as a baseline for comparison.\n\n2. Although the paper conducts 3 types of evaluations, only one dataset (MIMIC-CXR) serves as the testbed. To demonstrate better generalization, the paper could be strengthened by adding additional datasets. For example, including other chest X-ray datasets like CheXpert or MIMIC-CXR can be options. \n\n3. As a conditional diffusion model, it seems that the paper does not include sufficient baseline methods for comparison, for example, ControlNet [2] and SegGuidedDiff [3]. These methods can generate images given input conditions, and can be finetuned in an end-to-end manner. It would be beneficial for the authors to discuss in the paper why leveraging features extracted from a self-supervised model might outperform end-to-end finetuning approaches.\n\n[1] Rombach, Robin, et al. \"High-resolution image synthesis with latent diffusion models.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\n\n[2] Zhang, Lvmin, Anyi Rao, and Maneesh Agrawala. \"Adding conditional control to text-to-image diffusion models.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\n\n[3] Konz, Nicholas, et al. \"Anatomically-controllable medical image generation with segmentation-guided diffusion models.\" International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer Nature Switzerland, 2024."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please see Sec. Weakness for details."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper is generally clear and easy to understand."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes the DiNO-Diffusion method, which uses self-supervised image representation to guide diffusion for chest X-ray images and evaluates the diffusion model on three tasks: reconstruction, interpolation, and zero-shot lung lobe segmentation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper does not demonstrate the value of DiNO-Diffusion in the medical domain. The three evaluation methods mentioned—reconstruction, interpolation, and zero-shot lung lobe segmentation—do not seem to hold significant value for the medical field, in my understanding. In the context of X-ray scanning, what clinicians are more concerned about is the ability to detect lesions from images, particularly those that current X-ray models cannot address, but which could potentially be solved using DiNO-Diffusion. Critical evaluations and analyses are lacking in this regard.\n\n2. The novelty of the method is limited. There are similar works that use self-supervised models to guide diffusion [1,2,3]; the basic idea is typically to use self-supervised image embedding as a condition or to utilize self-supervised representation for clustering to generate pseudo labels as conditions. The method presented in this paper falls into this category but does not directly reference these works or perform comparisons with them.\n\n[1] Vincent et al., Self-Guided Diffusion Models, CVPR 2023\n\n[2] Vincent et al., Guided diffusion from self-supervised diffusion features, 2023.\n\n[3] Alexandros et al., Learned representation-guided diffusion models for large-image generation, CVPR 2024.\n\n3. The related work section is insufficient and lacks a detailed survey and comparison of works that guide diffusion with self-supervised methods."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Extensive experiments on the large dataset make this work a solid submission. I have no questions except those mentioned in the weaknesses."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Originality:\n1. Self-Supervised Training: DiNO-Diffusion introduces a novel approach for training diffusion models without large annotated datasets, addressing a key limitation in medical imaging.\n2. Creative Integration: Combines diffusion models with image embeddings from DiNO, a pretrained vision transformer, showcasing an innovative fusion of advanced techniques.\n\nQuality:\n1. Robust Experimental Design: Utilizes over 868k unlabeled chest X-Ray (CXR) images, ensuring the method's scalability and reliability.\nComprehensive Evaluation: Assesses performance using FID scores, AUC improvements, and Dice scores, providing a thorough validation of the approach.\n2. Strong Results: Demonstrates significant enhancements in classification (up to 20% AUC increase) and impressive zero-shot segmentation (up to 84.4% Dice score), highlighting effectiveness.\n\nClarity:\n1. Well-Organized Structure: Clearly structured sections on methodology, experiments, and results facilitate easy understanding.\nDetailed Explanations: Thoroughly explains key components like DiNO embeddings and their integration with diffusion models.\n2. Effective Visuals: Uses visual aids to illustrate qualitative comparisons and segmentation outcomes, enhancing comprehension.\n\nSignificance:\n1. Overcoming Data Scarcity: Enables training of diffusion models without extensive annotations, broadening their applicability in medical imaging.\n2. Enhancing Downstream Tasks: Improves data augmentation, leading to significant gains in classification and segmentation performance.\n3. Scalability and Adaptability: Easily adaptable to other medical imaging modalities and compatible with state-of-the-art diffusion models, supporting large-scale, multi-domain applications."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents DiNO-Diffusion, a self-supervised method designed to train Diffusion Models (DMs) for medical imaging without the need for large annotated datasets. Traditional DMs require extensive labeled data, which is often scarce in medical applications. DiNO-Diffusion addresses this limitation by conditioning the generation process on image embeddings extracted from DiNO, a pretrained vision transformer, allowing the use of over 868k unlabeled chest X-Ray (CXR) images from public datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. My main concern for the medical segmentation task is whether the performance on zero-shot segmentation can exceed the current leading methods like MEDSAM or MEDSAM2.\n2. I am curious if the dataset will be made available to the community.\n3. It would be better to define DiNOv1-Diffusion and DiNOv2-Diffusion in the caption of the first figure."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Self-supervision is a viable strategy to train diffusion models in medical imaging, where annotations are scarce and/or fragmented"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024dinodiffusion,\ntitle={Di{NO}-Diffusion: Scaling Medical Diffusion Models via Self-Supervised Pre-Training},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zkn2tvtt8J},\nnote={under review}\n}"
},
"abstract": {
"value": "Diffusion models (DMs) require large annotated datasets for training, limiting their applicability in medical imaging where datasets are typically smaller and sparsely annotated. We introduce DiNO-Diffusion, a self-supervised method for training DMs that conditions the generation process on image embeddings extracted from DiNO, a pretrained vision transformer. By not relying on annotations, our training leverages over 868k unlabelled images from public chest X-Ray (CXR) datasets. DiNO-Diffusion shows comprehensive manifold coverage, with FID scores as low as 4.7, and emerging properties when evaluated in downstream tasks, allowing to generate semantically-diverse synthetic datasets even from small data pools, demonstrating up to 20\\% AUC increase in classification performance when used for data augmentation. Results suggest that DiNO-Diffusion could facilitate the creation of large datasets for flexible training of downstream AI models from limited amount of real data, while also holding potential for privacy preservation. Additionally, DiNO-Diffusion demonstrates zero-shot segmentation performance of up to 84.4\\% Dice score when evaluating lung lobe segmentation, evidencing good CXR image-anatomy alignment akin to textual descriptors on vanilla DMs. Finally, DiNO-Diffusion can be easily adapted to other medical imaging modalities or state-of-the-art diffusion models, allowing large-scale, multi-domain image generation pipelines for medical imaging."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Diffusion Models",
"Generative AI",
"Medical Imaging",
"Self-Supervision"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/00555fbc2a7155effaeebe7bb6ccd7ce2b43a9a5.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "DiNO-Diffusion: Scaling Medical Diffusion Models via Self-Supervised Pre-Training"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zl0HLZOJC9 | Probabilistic Learning to Defer: Handling Missing Expert Annotations and Controlling Workload Distribution | main | Active | learning to defer;expectation - maximisation | other topics in machine learning (i.e., none of the above) | 6;6;6;8 | 2;2;3;3 | 3;3;3;3 | 3;3;3;3 | 3;3;3;3 | 6.5 | 2.5 | 3 | 3 | 3 | 0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "+ Q1. I found the data training process and the L2D objective on page 3 difficult to understand, possibly due to ambiguous notation. Does $\\mathbf{y}$ represent the ground truth label or output prediction? If $\\mathbf{y}$ refers to the ground truth, why does it depend on the expert annotations $\\mathbf{t}$? Should not $\\mathbf{y}$ be dependent on $\\mathbf{x}$ but independent of expert selection and annotation? The notation in Eq. 1, where $\\mathbf{t}_i$ becomes $\\mathbf{y}_i$, is particularly confusing. Revising the notation could improve clarity and help readers understand the logic.\n+ Q2. Regarding Eq. 1, if $\\mathbf{t}_i$ are deterministic expert annotations from a look-up table and $\\mathbf{y}_i$ represents the ground truth label, is it still valid to compute the log-likelihood with *hard labels*?\n+ Q3. Could you provide a more detailed explanation of why the accuracy of probabilistic L2D surpasses that of the best human expert (even with a 70% missing rate and especially for low coverages)? This was not explained in the discussion.\n+ Q4. Why does the area under the coverage-accuracy curve increase with higher missing rates on the MiceBone dataset? Why does having fewer annotations lead to better performance? It is mentioned that this is due to inconsistent human expert behavior between the training and test sets. How do missing annotations help in this case?\n+ Q5. The discussion on balanced workload and inconsistent human performance between training and test sets could benefit from further elaboration (Page 8, Line 423). Would assigning less weight to a particular expert act as a form of regularization?\n+ Q6. (Minor suggestion) It would be helpful to present the results in Figure 3 with equal Y-axis scales for subplots corresponding to the same dataset."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "+ The paper addresses a research question relevant to real-world applications by providing a solution for settings where expert annotations are incomplete. \n+ The results show that reducing the workload of highly accurate (and typically overloaded) human experts only slightly decreases overall accuracy and can lead to higher accuracy in scenarios with inconsistent expert performance between the training and test sets. \n+ The proposed controllable workload formulation simplifies the evaluation of accuracy-coverage ratios compared to existing methods, which often require assumptions or post-hoc adjustments to balance learnable models and human experts."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper extends the concept of *learning to defer* (L2D) to scenarios with missing expert annotations and balanced expert workloads. The authors propose a formulation that relies on a clever application of the expectation-maximization algorithm, which naturally handles missing data. Additionally, they introduce a constraint within the expectation stage of the algorithm to manage expert workloads. The proposed L2D is tested on both synthetic and real-world datasets, resulting in a higher area under the coverage-accuracy curve compared to the evaluated baselines."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "+ As acknowledged by the authors, the proposed formulation does not scale well with the number of human (or learnable) experts. While grouping experts into clusters is suggested as potential future research direction, this introduces the number of clusters as a hyperparameter, necessitating additional tuning and potentially hindering scalability.\n+ Although the paper is concise and generally well-written, the notation is ambiguous in some places (see Q1 and Q2), and the discussion of the results is very brief and could benefit from additional explanations (see Q3 and Q4). \n+ (Minor comment) I recommend the authors release the source code to reproduce results. While not mandatory, providing the code would help readers understand how to implement the algorithm proposed on page 14, especially the implementation steps required to solve the optimization equation formulated in Eq. 4 on page 4."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I am unsure about the computational cost of the method compared to the considered baseline. Could the authors elaborate more on this? Ideally, by comparing and reporting the runtimes of the proposed method and the baselines across the different datasets."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper is well-written and easy to follow. The proposed probabilistic modeling techniques and the use of EM in this setting seem novel and an interesting contribution. Experimental results show the performance gain of the method compared to the baselines."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper studies the problem of learning to defer for a realistic setting where expert annotations can be missing and where workload balancing among experts is crucial. The authors propose a probabilistic modeling approach where EM is used to address the missing annotations, In particular, a constrained optimization during the E step regulates workload balancing among human experts and the AI classifier. The proposed method is evaluated on synthetic and real-world datasets and is shown to perform on par or better than the considered baselines."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "A key weakness is highlighted by the authors in the paper: a bad dependency on the number of human experts. Although they discuss potential remedies, e.g., clustering, this probably wouldn't work for a setting with diverse human experts (where the number of clusters is large). Are there other dimensionality reduction approaches (e.g., hierarchical clustering) that one could consider for this setting, and how would they affect computational cost?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Could the authors answer points/questions 2 and 3 in the list above \"weaknesses\"?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper addresses an interesting issue in L2D and proposes a sound solution based on a probabilistic approach. \n- The workload management is particularly promising in many areas where AI is supporting expert decisions, such as in medicine. \n- This is also relevant in addressing ethical and practical constraints, and possibly even regulations and laws. \n- The ablation study offers insight into the mechanism that leads to prioritising the highest-performing humans with the imbalanced approach, with possible overfitting.\n- It is interesting that the study allows for the conclusion that in practice it may be desirable to distribute workload evenly across all human experts."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a probabilistic framework for “Learning to Defer” (L2D) that enables an AI system to either make decisions independently or defer them to human experts based on confidence levels. The approach addresses the limitations of existing L2D methods by handling cases where not all experts can annotate every data sample. This approach is designed to reduce the annotation burden. The approach is based on the Expectation-Maximization (EM) algorithm to optimize workload distribution between AI and human experts. The proposed method shows promising performance across synthetic and real-world datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Overall, the approach has some limitations, which I acknowledge are also partially discussed. However, it's unclear how well the system can scale given that each expert requires a probabilistic model. It's also unclear to me how well the clustering of experts would work and what risks are associated with it. \n\n2. I would be interested in reading more about the trade-off between deferring fewer cases and deferring the cases with the highest uncertainty, which is not much discussed. Clearly, there will be settings, e.g., healthcare, where deferring on uncertain cases would be quite important. \n\n3. How could the model be adapted to take into consideration fast- and slow-changing expert performance? The model assumes static performance; however, experts could have fast performance changes, e.g. due to fatigue, or slow performance changes, e.g. due to learning over a period of time. It would be nice to understand how the model could accommodate such dynamic scenarios."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* Would it make sense to repeat the learning experiments several times in order to be able to estimate the uncertainty of the results?\n* Does it make sense to give the results in Table 1 with four valid digits? Are the results really that accurate?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* It seems to me that the topic has been addressed very comprehensively\n* The comparisons include all the mentioned relevant predecessor methods"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper deals with human - AI cooperation. It presents a modification of the learning to defer technique which can handle incomplete expert annotations and balance the workload of the experts."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I can't see any significant weaknesses. However, this may also be because the topic is new to me.\n\nFurther comments:\n\n* In the case of “In contrast, machine learning or AI models excel at processing large amounts of information but may be prone to biases (Meehl, 1954)”, the reference chosen cannot be used as evidence for the statement because “machine learning or AI models ... processing large amounts of information” were not available until long after 1954.\n* I find statement “Ideally, a perfect balanced workload among experts and the AI model can be expressed as follows” a little strange. After all, you will only strive for an equal distribution if all experts are equally competent.\n* I wonder about “slightly-similar”, how can something be slightly similar?\n* I find it a bit irritating that there is no section called “Conclusion”.\n* “50 %” -> “50%”"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024probabilistic,\ntitle={Probabilistic Learning to Defer: Handling Missing Expert Annotations and Controlling Workload Distribution},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zl0HLZOJC9},\nnote={under review}\n}"
},
"abstract": {
"value": "Recent progress in machine learning research is gradually shifting its focus towards *human - AI cooperation* due to the advantages of exploiting the reliability of human experts and the efficiency of AI models. One of the promising approaches in human - AI cooperation is *learning to defer* (L2D), where the system analyses the input data and decides to make its own decision or defer to human experts. Although L2D has demonstrated state-of-the-art performance, in its standard setting, L2D entails a severe limitation: all human experts must annotate the whole training dataset of interest, resulting in a slow and expensive annotation process which can subsequently influence the size and diversity of the training set. Moreover, the current L2D does not have a principled way to control workload distribution among human experts and the AI classifier that is important to optimise resource allocation. We, therefore, propose a new probabilistic modelling approach inspired from mixture-of-experts, where the Expectation - Maximisation algorithm is leveraged to address the issue of missing expert's annotations. Furthermore, we introduce a constraint, which can be solved efficiently during the E-step, to control the workload distribution among human experts and the AI classifier. Empirical evaluation on synthetic and real-world datasets show that our proposed probabilistic approach performs competitively, or even surpasses previously proposed methods assessed on the same benchmarks."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"learning to defer",
"expectation - maximisation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/7a9d3db15c9e3cb5607848db6a2176642fe3e2e2.pdf"
},
"presentation": null,
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Probabilistic Learning to Defer: Handling Missing Expert Annotations and Controlling Workload Distribution"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zl3nFqY8l1 | RuleRAG: Rule-Guided Retrieval-Augmented Generation with Language Models for Question Answering | main | Active | Rule-Guided Retrieval;Rule-Guided Generation;RAG;Question Answering | neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.) | 3;5;5;6;6 | 3;3;3;4;3 | 3;3;2;3;3 | 2;3;2;3;3 | 3;3;3;4;3 | 5 | 3.2 | 2.8 | 2.6 | 3.2 | 0.456435 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "None"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- **About the retrieval and generation**: I'm not sure whether the mappings between rules and retrieved results are recorded and used in the generation. \n- **About the conclusions in Section 5.2**: The second conclusion (i.e., the introduced rules can provide better guidance when using larger models with the same LLM architecture) and the third one (RGFT is fairly effective and necessary for lightweight LLMs) are somewhat contradictory."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- **Neural symbolic method**: RuleRAG explicitly incorporates symbolic rules into a neural language model, providing clear guidance for the retrieval stage and ensuring that the generated answers are logically consistent with the retrieved information.\n- **Comprehensive design and evaluation**: RuleRAG-ICL and RuleRAG-FT are designed for different scenarios, and both demonstrate strong generalization capabilities with various LLMs and retrievers."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper designed a retrieval-augmented generation framework named RuleRAG to address the limitations of existing RAG approaches in knowledge-intensive question answering. RuleRAG leverages symbolic rules to guide the retrieval and generation processes \n to ensure that retrieved documents are logically relevant to the query and the generated answers properly refer to the retrieved documents. To validate the effectiveness of the proposed method, the paper introduces five new QA benchmarks that require reasoning and utilize rules based on existing benchmarks. Experimental results show that RuleRAG achieves strong performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- **Possibly limited scope of application**: I'm not sure whether the symbolic rules could be applied to real-world applications, since the motivating example is somewhat straightforward.\n- **Limited rule learning method**: Obtaining high-quality rules is non-trivial. This paper leverages some rule induction tools like AMIE on structured knowledge resources, and what if unstructured text?\n- **Unclear evaluation benchmark**: I do not understand why to construct evaluation data based on these benchmarks, which are not designed for the knowledge-intensive task. Why not consider more popular complex KBQA benchmarks, e.g., HotpotQA? On the other hand, these benchmarks may be seen when model training. Such choices may influence the convincingness of the conclusions."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Why do the authors not choosing some of the SOTA decoder-only retrievers, such as E5-mistral-7b-instruct? Will it better help the ICT compared to the BERT based methods, e.g. DPR, contriever?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper presents a novel research direction to add rules from knowledge base to help question answering.\n2. The experiments are solid with many existing SOTA models with convincing results.\n3. The authors also demonstrates the generalization of their rules.\n4. The code is open sourced which will help the community."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a novel method that incorporates the rules from the existing knowledge base to aid the retrieval augmented generation. The rules are included in both retrieval and generation stage where the improvements are quite significant compared to the vanilla approach. The authors conduct extensive experiments with state-of-the-art LLMs and demonstrate the generalization of their approach."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Which mining method is the best or how to choose the mining method is missing in the paper"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. How to control and estimate the quality of the constructed dataset?\n2. The dataset link should also be anonymous."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper proposes a graph augmented generation framework, RuleRAG, aimed at improving the RAG system by incorporating entity relationships from the knowledge base.\n2. The authors clearly outline their approach, including both RuleRAG-ICL and RuleRAG-FT. Details of the retrievers and generators are comprehensively explained, along with an in-depth description of the dataset construction.\n3. New rule-aware benchmarks were created to evaluate the proposed method, and extensive experiments demonstrate the method’s effectiveness on these benchmarks ."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposed RuleRAG, a new approach to improve the performance of Retrieval-Augmented Generation (RAG) systems. Specifically, RuleRAG takes symbolic rules from the knowledge graph and introduces them into the retriever and generator. The proposed approach consists of two versions: RuleRAG-ICL (context learning) and RuleRAG-FT (fine-tuning), which have shown significant performance gains in several benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. This paper proposes RuleRAG and claims that it uses a rule-guided method to build a RAG framework for question and answer tasks. However, the experiments conducted are not comprehensive enough: this paper tests the performance of the proposed method only on a self-constructed RuleQA-series dataset, which seems to be a KBQA task in disguise . Many representative RAG tasks such as NQ, TriviaQA (for short-form QA), ASQA ( for long-form QA), HotPotQA (for multi-hop QA) are not tested, which leads us to not be able to fairly observe the performance of RuleRAG in different scenarios.\n\n\n2. The RuleQA dataset proposed in this paper is constructed based on entities in the KB. However, the rules in the rule base, the queries in the test dataset and the documents in the corpus are also derived from the KB, does it mean that for every query in RuleQA, there exist exactly corresponding rules and documents to answer the question? Such a construction seems to be more favorable for RuleRAG, as it makes RuleRAG have correspondences for acquiring and introducing rules, however this is not possible in real-world QA. The authors should conduct experiments on a wide range of publicly credible datasets to validate the effectiveness of their approach.\n\n3. Some methods, such as IRCOT, Self-RAG and GraphRAG, are not compared. They also focus on extracting more accurate query-related contents for building a more effective RAG system.\n\n4. The backbone retriever is not new. Some methods, such as ANCE and BGE, are not compared."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See the weakness."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper conducted extensive experiments and analyses to validate the effectiveness of the proposed method, despite some biases in the experimental setup.\n2. The motivation for introducing rules to enhance the utilization of documents is reasonable, and experiments have shown that this method indeed helps improve the model's performance on the evaluation datasets compared to direct RAG."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a novel framework for improving knowledge-intensive question answering (QA) by integrating symbolic rules into the Retrieval-Augmented Generation (RAG) paradigm. The authors identify two main limitations of standard RAG models: 1) insufficient retrieval relevance, as retrievers often fail to capture logical relationships, and 2) lack of explicit guidance for language models on how to utilize retrieved documents.\n\nTo address these, RuleRAG incorporates symbolic rules to guide both the retrieval and generation phases, enhancing answer accuracy. The framework operates in two modes: **RuleRAG-ICL** (In-Context Learning), which uses rules to steer retrieval and generation in a training-free manner, and **RuleRAG-FT** (Fine-Tuning), which fine-tunes models to strengthen rule adherence. Additionally, the authors develop five rule-aware QA benchmarks to test the system across temporal and static knowledge scenarios. Experimental results show effective improvements in retrieval relevance (Recall@10) and answer accuracy (Exact Match) compared to standard RAG, showcasing RuleRAG's ability to generalize to unseen rules and scale effectively."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The evaluation set is not robust enough: The method's self-constructed dataset seems to favor scenarios where rule-based approaches are required to answer correctly, which introduces some bias. The authors did not evaluate the method on more widely and commonly used benchmarks such as NQ, TQ, HotpotQA, StrategyQA, etc.\n\n2. The baseline models are not comprehensive: Comparing only with direct RAG seems somewhat weak. Currently, there are more variants of RAG, such as decomposing questions before retrieval, which can also address multi-hop retrieval to some extent.\n\n3. Limited generalizability: The paper does not conduct experiments on a broader range of datasets, making it difficult to demonstrate the method's generalizability, especially in scenarios where large models have been fine-tuned."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1. How does RuleRAG handle exceptions or anomalies, particularly for domain-specific cases (e.g., outliers in temporal data or unrecognized entities) that do not conform to the predefined rules? \n2. Given that the quality and coverage of these rules can significantly impact overall performance, what specific strategies can be implemented to ensure the diversity and representativeness of the rules extracted from the knowledge graph? For example, how can the authors incorporate techniques to evaluate rule completeness or prioritize rules based on domain relevance? \n3. At times, the retrieved content might not directly include answers or rules, but still plays a crucial role in helping an LLM understand and respond to inquiries. How does RuleRAG ensure that this valuable contextual information is retained, particularly in cases where the relevant content may be ambiguous or indirect?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. RuleRAG uniquely combines rule-based guidance with retrieval and generation, showing promising results in knowledge-intensive QA.\n2. RuleRAG has the capability to generalize beyond the rules it has been specifically trained on, albeit with less than optimal performance. This ability highlights its significant advantage and potential for broader applications beyond its original rule set.\n3. The construction of five rule-aware QA benchmarks, including both temporal and static scenarios, provides a thorough evaluation of RuleRAG's capabilities and its scalability across different QA tasks.\n4. RuleRAG shows significant improvement over standard RAG, and the experimental results appear solid."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents RuleRAG, an innovative approach that combines rule-guided retrieval and generation for knowledge-intensive question answering. This method addresses the limitations within existing RAG frameworks by utilizing symbolic rules to direct both retrievers and generators. RuleRAG enhances performance through in-context learning and fine-tuning processes. Additionally, it establishes rule-aware QA benchmarks and shows substantial improvements over standard RAG across various metrics."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The performance of the RuleRAG framework is heavily dependent on the quality and coverage of the mined rules, which may not always be comprehensive or accurate. The effectiveness of RuleRAG hinges on the accuracy of the rule-mining process; if the algorithms used (AMIE3 for static KGs and TLogic for temporal KGs) fail to extract high-confidence rules, the performance of the entire framework could be compromised.\n2. The paper highlights RuleRAG's potential vulnerability to irrelevant or misleading rules, a critical issue for question-answering systems using external knowledge. It lacks detailed mechanisms for filtering such rules, risking suboptimal performance during retrieval and generation phases. The authors could consider implementing or evaluating specific filtering mechanisms, such as incorporating a rule relevance scoring step, or exploring the feasibility of using the LLM itself to assess rule applicability prior to retrieval. \n3. This paper emphasizes individual rules and their impact on QA performance but does not explore the interactions between multiple rules or complex rule hierarchies, which are essential for handling more sophisticated queries."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We point out two high-level issues of current RAG and propose a method named RuleRAG, including RuleRAG-ICL and RuleRAG-FT, to effectively improve the performance of multiple retrievers and generators by rule-guided retrieval and generation."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024rulerag,\ntitle={Rule{RAG}: Rule-Guided Retrieval-Augmented Generation with Language Models for Question Answering},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zl3nFqY8l1},\nnote={under review}\n}"
},
"abstract": {
"value": "Retrieval-augmented generation (RAG) framework has shown promising potential in knowledge-intensive question answering (QA) by retrieving external corpus and generating based on augmented context. However, existing approaches only consider the query itself, neither specifying the retrieval preferences for the retrievers nor informing the generators of how to refer to the retrieved documents for the answers, which poses a significant challenge to the QA performance. To address these issues, we propose Rule-Guided Retrieval-Augmented Generation with LMs, which explicitly introduces symbolic rules as demonstrations for in-context learning (RuleRAG-ICL) to guide retrievers to retrieve logically related documents in the directions of rules and uniformly guide generators to generate answers attributed by the guidance of the same set of rules. Moreover, the combination of queries and rules can be further used as supervised fine-tuning data to update retrievers and generators (RuleRAG-FT) to achieve better rule-based instruction following capability, leading to retrieve more supportive results and generate more acceptable answers. To emphasize the attribution of rules, we construct five rule-aware QA benchmarks, including three temporal and two static scenarios, and equip RuleRAG with several kinds of retrievers and generators. Experiments demonstrate that training-free RuleRAG-ICL effectively improves the retrieval quality of +89.2\\% in Recall@10 scores and generation accuracy of +103.1\\% in exact match scores over standard RAG on average across the five benchmarks, and further fine-tuned RuleRAG-FT consistently yields more significant performance enhancement. Extensive analyses indicate that RuleRAG scales well with increasing numbers of retrieved documents and exhibits generalization ability for untrained rules. 
Our code and benchmarks are available at [https://anonymous.4open.science/r/ICLR2025_RuleRAG_ICL_FT](https://anonymous.4open.science/r/ICLR2025_RuleRAG_ICL_FT)."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Rule-Guided Retrieval",
"Rule-Guided Generation",
"RAG",
"Question Answering"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/20021f0f06d7c2f873d090e4ddc98aa1fd02c153.pdf"
},
"presentation": null,
"primary_area": {
"value": "neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/331e338ba1e39079a454c083741e41d7cd8a6e85.zip"
},
"title": {
"value": "RuleRAG: Rule-Guided Retrieval-Augmented Generation with Language Models for Question Answering"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zl3pfz4VCV | MMTEB: Massive Multilingual Text Embedding Benchmark | main | Active | natural language processing;benchmark;sentence embeddings;multilingual | datasets and benchmarks | 5;6;8;8 | 5;3;4;4 | 3;3;4;3 | 3;3;4;4 | 2;2;3;3 | 6.75 | 4 | 3.25 | 3.5 | 2.5 | -0.272166 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- How do the top models on MTEB leaderboard do on this new dataset and whether this new dataset changes the ranking of the leaderboard?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper is well written and the main points are clearly communicated\n- The dataset is a great extension to the MTEB and would be a good resource to research community towards building largescale multilingual embedding models\n - The coverage of the dataset is great"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a new set of benchmarks called \"massive multilingual text embedding benchmark\". This benchmark includes more than 500 tasks and covers a lot of low resource languages as well. They also introduce downsampling technique such that the resources required for the evaluation is minimized."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Based on table9, one limitation is that most of the crowd submissions are already based on existing public datasets from multiple language domains and not particularly for this dataset construction effort."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- How does the evaluation score compare with other embedding benchmarks? For instance, since the AIR-Benchmark doesn’t disclose its evaluation sets, does this mean there might be no overlap with MMTEB?\n- Why do smaller models perform better in multilingual contexts while larger models excel on English datasets? Is this pattern unique to comparisons involving only the e5 models?\n- Could you provide more MMTEB benchmark results using LLM embedding models from literature, such as SFR-Embeddings, NV-Embed, bge, and Qwen models? Since some of these models are English-based, please include their results in Table 15.\n- In Table 9, could you provide the sample counts (queries and documents) for each task? Additionally, please list the 1000 languages and 500 quality-controlled evaluation tasks with examples.\n- What is the sample count for each language, and is there an imbalance in sample numbers between languages? Why is it necessary to collect samples exhaustively from native speakers? Could machine translation help address sample imbalance?\n- Could you provide contributor statistics, such as distribution across countries, native speakers, domains, and similar tasks?\n- Since MMTEB appears to cover most of existing public embedding evaluation datasets, has there been any further data collection, annotation, or synthetic dataset creation for MMTEB? If so, please provide details.\n- Paper does not properly explain the code and long-document benchmarks. Could you provide details on these benchmarks and the performance numbers for models from the literature?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Given that recent embedding models often show a trend of being optimized for MTEB benchmark tasks, it would be valuable to develop larger-scale benchmarks that include a broader range of tasks.\n- Additionally, MMTEB downscales datasets and caches embeddings to help alleviate computational bottlenecks during evaluation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "MMTEB addresses the limitations of traditional text embedding evaluations to extend the current popular MTEB benchmark to over 500 quality-controlled tasks across thousand languages, making it the largest multilingual collection for embedding model evaluation. MMTEB is a large-scale, open collaboration benchmark where the contributors have diverse backgrounds and introduce diverse and challenging tasks, such as instruction following, long-document retrieval, and code retrieval."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "In general, dataset information (such as sample numbers, multilingual types, etc) and relevant model benchmark numbers are missing. Find more details in questions."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. In Lines 86 to 93, could you provide a more intuitive metric for comparing computational resources of different benchmarks, such as the time required to complete evaluations on a single A100 GPU?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "1. I believe the efforts to reduce the computational resources required for evaluation are very meaningful, as they will encourage more researchers from low-resource language regions to use this benchmark. If MMTEB had simply expanded the scale of MTEB, it could be expected that most strong baseline models would originate from commercial companies with high computational resources, which could hinder the rapid development of text embedding research.\n2. Each computational resource optimization strategy is described in detail, and the methods are easy to implement, which facilitates the adaptation of custom datasets."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces MMTEB, a massive multilingual text embedding benchmark that covers over 500 tasks in more than 1,000 languages. Compared to previous benchmarks, MMTEB considers the “low-resource double bind” during its construction and significantly reduces the computational resources needed for evaluation through various strategies while preserving the relative ranking of models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The depth of analysis across different datasets seems inconsistent. For instance, the “Clustering” section in 2.3.1 provides an average Spearman correlation, but the “Retrieval” and “Bitext Mining” sections lack similar metrics. Moreover, as seen from the results in Appendix C.1.2, the selection of “Retrieval” strategy is based on analyses from only the NQ and TREC-COVID datasets, which may lead to biased hyperparameter selection. Although the current level of detail is already quite high, given MMTEB’s potential impact, I believe further detail would only be beneficial.\n2. The abstract mentions “a diverse set of challenging, novel tasks such as instruction following, long-document retrieval, and code retrieval,” but I saw little content related to these datasets in the paper. I think the authors should clearly explain:\n- **Why were these tasks included in MMTEB?** (This is a benchmark for multilingual text embeddings, yet instruction retrieval is currently available only in one language.)\n- **How were these new tasks integrated into the benchmark?** (I believe directly including long-document retrieval under the “retrieval” category should be done with caution, as it would require researchers to consider incorporating long-document-related datasets in their training data, which to some extent runs counter to the goal of addressing the “low-resource double bind.”)\n- **How will these new tasks impact model performance?** (The context length limitation of models such as multilingual-e5-large-instruct could hinder their performance on tasks like long-document retrieval. In the LongEmbed evaluations [1], it performs worse than the Mistral-based version. Additionally, the results in Table 16 show that the Mistral-based models perform better on MTEB (code). 
Thus, claiming the exceptional performance of multilingual-e5-large-instruct in the Introduction without further clarification may mislead readers.)\n\n[1] LongEmbed: Extending Embedding Models for Long Context Retrieval. arXiv:2404.12096"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Should Section 3 be titled “Experimental Settings” instead of “Results” to better reflect its content?\n2. The evaluation metrics and the main metrics for certain tasks are not described, e.g. instruction retrieval, reranking, multi-label classification.\n3. Should bitext mining and STS be considered closely-related task categories in Figure 1?\n4. Summarization showed minimal correlation with embedding performance in MTEB. If it is still included in MMTEB, what justifies its inclusion?\n5. Does MMTEB include programming language benchmarks, such as CoIR? Additionally, what criteria of multilingualism are used to determine inclusion in the study?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "(+++) MMTEB exemplifies a remarkable community-driven effort, engaging diverse contributors and fostering inclusivity.\n\n(+++) Introduces computational optimizations like downsampling and hard negative sampling, reducing evaluation costs to 3.11 hours on a 7B model (H100 GPU), making it accessible to low-resource settings.\n\n(++) Covers over 500 tasks across 10 categories in more than 1,000 languages, with a strong focus on low-resource languages and domains. But it lacks enough justification demonstrating the quality and value of each dataset. Provides an open-source, public leaderboard that encourages continuous contributions to advancing multilingual embedding research.\n\n(+) Expands traditional benchmarks by including new task types like instruction following, long-document retrieval."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces MMTEB, an extensive evaluation suite designed to assess text embedding models across over 1,000 languages and 500 tasks, serving as a multilingual extension to previous benchmarks like MTEB. MMTEB includes novel task categories, such as instruction following, long-document retrieval, and code retrieval. A significant contribution of MMTEB is its introduction of computational optimizations, including downsampling and hard negative sampling, which reduce compute requirements and enhance accessibility. The authors' findings reveal that smaller, instruction-tuned multilingual models outperform larger monolingual models in low-resource language settings."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The study lacks a clear articulation of the specific knowledge gap that MMTEB addresses beyond what MTEB has already achieved in evaluating multi-task capabilities of embedding models. The results section suggests that multilingual scores are closely aligned with English, making it difficult to discern the unique value that MMTEB offers. Additional analysis could better exploit the benchmark's value and clarify its unique contributions.\n\n2. While MMTEB aims to include as many relevant datasets as possible, it is unclear how these datasets were constructed or validated. Details on dataset quality, annotation methods (e.g., human vs. model-generated), and statistics (e.g., query-document ratios) would enhance transparency and reliability, especially given some datasets may be model-generated, such as FollowIR.\n\n3. The paper mentions retaining the top 250 ranked documents per query for each dataset and model but does not specify which model(s) were used to select these hard negatives. Clarifying this would help assess the robustness of the benchmark's retrieval tasks.\n\n4. The combination of 132 tasks makes it challenging to interpret a model's performance on specific languages or language families. While geopolitical categorization is helpful, further segmentation by language, domain, or specific capabilities could provide a more systematic and granular view of model performance. Expanding on the existing MTEB language families in Appendix H could offer researchers a clearer understanding of model weaknesses by language or domain."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduce the Massive Multilingual Text Embedding Benchmark (MMTEB) including 500+ tasks across 1,000+ languages, greatly expanding multilingual evaluation for embeddings."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024mmteb,\ntitle={{MMTEB}: Massive Multilingual Text Embedding Benchmark},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zl3pfz4VCV},\nnote={under review}\n}"
},
"abstract": {
"value": "Text embeddings are typically evaluated on a narrow set of tasks, limited in terms of languages, domains, and task types. To circumvent this limitation and to provide a more comprehensive evaluation, we introduce the Massive Multilingual Text Embedding Benchmark (MMTEB) -- a large-scale community-driven initiative expanding MTEB to over 500 \\textit{quality controlled} evaluation tasks across 1,000+ languages. MMTEB includes a wide range of challenging novel tasks such as instruction following, long-document retrieval, and code retrieval, and represents the largest multilingual collection of evaluation tasks for embedding models to date. We use this collection to construct multiple highly multilingual benchmarks. We evaluate a representative set of models on these benchmarks.\nOur findings indicate that, while LLM-based models can achieve state-of-the-art performance on a subset of languages, the best-performing publicly available model across languages is the notably smaller, multilingual-e5-large-instruct.\n\nMassive benchmarks often impose high computational demands, limiting accessibility, particularly for low-resource communities. To address this, we downsample tasks based on inter-task correlation (i.e., selecting only a diverse set of tasks) while preserving relative rankings.\nWe further optimize tasks such as retrieval by sampling hard negatives, creating smaller but effective splits. These optimizations allow us to introduce benchmarks at a significantly lower computational cost. For instance, we introduce a new zero-shot English benchmark that maintains a similar ordering at a fraction of the cost."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"natural language processing",
"benchmark",
"sentence embeddings",
"multilingual"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/a2b9ccc978c6f8877f7a503f186849e73462d8d3.pdf"
},
"presentation": null,
"primary_area": {
"value": "datasets and benchmarks"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "MMTEB: Massive Multilingual Text Embedding Benchmark"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zlAUnwhE2v | ChemThinker: Thinking Like a Chemist with Multi-Agent LLMs for Deep Molecular Insights | main | Active | Molecular Property Prediction;Molecular Representation Learning;Multi-Agent LLMs | applications to physical sciences (physics, chemistry, biology, etc.) | 1;3;3;5 | 4;4;5;4 | 2;1;2;2 | 1;2;2;2 | 1;2;2;2 | 3 | 4.25 | 1.75 | 1.75 | 1.75 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Can you provide a more detailed explanation of how the three perspectives (general molecular properties, data-driven analysis, and task-specific factors) are integrated within the multi-agent framework? Specifically, how do these perspectives interact to influence the LLM's internal representations?\n\n2. How does ChemThinker differentiate itself from existing models that also utilize LLM embeddings with MLPs? What specific contributions does your framework make that advance the field of molecular property prediction beyond previous approaches?\n\n3. Why was the QM9 dataset not included in your experimental evaluations? Given its significance in the field, do you plan to evaluate ChemThinker on this dataset in future work?\n\n4. In the results section, could you clarify how ChemThinker's performance compares to recent state-of-the-art models? To strengthen your claims of superior performance, consider including comparisons with a broader range of state-of-the-art models, particularly those referenced in the literature (e.g., studies by Ross et al. and Soares et al.)."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "ChemThinker stands out for its application of a multi-agent framework within large language models (LLMs) to enhance interpretability in molecular property prediction. The approach draws inspiration from the analytical methods used by chemists, applying a novel multi-perspective representation structure to guide the model’s focus on relevant features. This method embeds expert knowledge within LLM capabilities, allowing for highly targeted interpretability that mirrors real-world chemist reasoning. The integration of LLMs for molecular analysis through a structured, interpretative framework also pushes the boundaries of LLM application in cheminformatics."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "ChemThinker leverages a multi-agent structure within large language models (LLMs) to control and interpret the internal representations of molecular concepts and functions. It mimics the way chemists analyze molecules by incorporating insights from three perspectives: general molecular properties, data-driven analysis, and task-specific factors. Each perspective functions as an agent, guiding the model’s internal representation to generate more interpretable and targeted predictions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The method proposed in ChemThinker does not introduce significant innovations compared to existing approaches. The use of LLM embeddings in conjunction with multi-layer perceptrons (MLPs) for molecular property prediction is not a novel concept.\n\nThe multi-agent framework is central to ChemThinker’s design, but the rationale behind choosing three specific perspectives—general molecular properties, data-driven analysis, and task-specific factors—could be clarified. Why were these perspectives selected, and are they mutually exclusive or interdependent in practice? The authors could include a discussion or ablation study examining the impact of each perspective on predictive performance and interpretability to validate their choices.\n\nThe experimental results presented in the paper do not demonstrate that ChemThinker achieves state-of-the-art performance in molecular property prediction across both classification and regression benchmarks. For a comprehensive comparison, the authors should consider the results reported in the following studies:\n\nRoss, Jerret, et al. \"Large-scale chemical language representations capture molecular structure and properties.\" Nature Machine Intelligence 4.12 (2022): 1256-1264.\nSoares, Eduardo, et al. \"A Large Encoder-Decoder Family of Foundation Models For Chemical Language.\" arXiv preprint arXiv:2407.20267 (2024).\nThese references provide significant insights into current methodologies and benchmark performances in the field. Additionally, incorporating the QM9 dataset into their experimental framework would enhance the robustness of their evaluation and allow for a more direct comparison with existing models that utilize this widely recognized benchmark in molecular property prediction."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "Here, LLM agents appear to be used to generate embedding of molecular descriptors. However, these are general-purpose LLMs that can be simultaneously used by bad actors to propose ways of bringing harmful chemical entities into the world. The authors should address this possibility."
},
"flag_for_ethics_review": {
"value": [
"Yes, Potentially harmful insights, methodologies and applications"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "No questions, really. Please rewrite the manuscript with a) clear statement of the task you are solving, b) clear statement of the systematically improvable aspect of the modeling, and without a) nebulous verbiage about outstanding capabilities arising from LLMs in-context learning and multi-agency."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "I'd love to pinpoint a strength, I just can't find one."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The contribution introduces a multi-agent LLM framework for chemistry that is intended to control the internal representations of concepts and functions within LLMs. The agents are initialized in contexts intended to emulate subject matter experts' approach to molecular analysis. The described agents offer insights into general molecular properties, data-driven analysis, and task-specific factors. Accumulated representations from the agents are processed by a multi-layer perceptron (MLP)."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper is poorly written. It is full of typos that should be caught by any spellchecker.\n\nThe paper claims to mirror the way a chemist thinks. It fails to define the persona of a \"chemist\", though. This is important because chemistry is an exceptionally broad field that absolutely does not reduce to analysis of molecular structure.\n\nWhat is \"general molecular thinking\" in Section 3.1? Traditional computational tools are absolutely going beyond calculating simple chemoinformatic descriptors - there's a huge field of quantum chemistry and molecular simulations that can handle really complicated chemical scenarios. The entire semantic structure of the section is so nebulous, it raises a question as to the authorship of the text. There's no single technical, reproducible, systematically improvable description of the respective agent.\n\nThe questions provided as examples barely make sense for a trained SME. For example, \"How does the molecule’s 3D shape change, and what are the effects of these changes?\" cannot have a meaningful answer - molecules exist as Boltzmann distributions of 3D configurations and the shifts of these distributions depend on the external conditions that have to be specified.\n\nThe same ambiguity and lack of technical substance characterize sections 3.2 \"intuition-driven thinking\" and 3.3 \"task-specific thinking\". \n\nSection 3.4 \"thought-representation fusion\" describes a fusion procedure that exists completely out of context of the previous sections. Representations of molecular structure descriptors, such as SMILES, and representations of arbitrary reasoning utterances about SMILES are different entities and the authors clearly do not understand this.\n\nThe performance benchmarks suggest that the paper aims to solve simple supervised learning tasks in the QSAR/QSPR domain. 
The performance sees some improvement compared to alternatives, but it should be noted that this moderate improvement is achieved by increasing the computing footprint by a factor equal to the number of agents. This looks a lot like a case of diminishing returns.\n\nSome of the data in the tables are incorrectly labeled to favor the reported model. For example, in Table 2, ESOL(1), the performance of LLM4SD should be ranked better than ChemThinker (Gallactica). There are several instances where comparison of the models in terms of average performance might look different from the comparison of the performance distributions (Table 1: BBBP ChemThinker OpenAI vs LLM4SD; SIDER ChemThinker OpenAI vs UniMol, etc.)\n\nOne of the not-so-obvious aspects of this work is that while it is reporting regression and classification of molecules expressed in some electronic format, such as SMILES, it does employ LLMs that have full capability to help bad actors and bring harmful items into the physical world, by proposing synthetic routes. Unfortunately, the authors don't seem to realize that the simple feature-engineering task accomplished by LLM agents places their work in a much broader context that requires ethical guardrails."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "Reporting a ROC-AUC of 99.4% at predicting the clinical toxicity of a molecule can have severe consequences: such high values indicate that the predictions are reliable, and molecules predicted negative might be considered as safe, and non-toxic to humans and might be used in a clinical trial. Thus, potential harm might be done to humans if such incorrect high values performance values are reported."
},
"flag_for_ethics_review": {
"value": [
"Yes, Other reasons (please specify below)"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "See box above."
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The general topic of molecular property prediction and its interpretability is important."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose a molecular property prediction method based on using and combining LLMs' representations."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "# A) Lack of clarity and false and tenuous claims.\n\nThe work completely fails to embed itself into relevant related work. It provides a highly flawed and biased view on the field and misleads readers in many ways. The first paragraph cites completely inappropriate papers, such as (Yang, 2019) for \"molecular property prediction\". Molecular property prediction is a decade-old field, and here, for example, papers by Corwin Hansch [1] should be cited. Neural networks entered the field in the 1990s [2], and deep learning methods were used from 2014 on [3-6]. Toxicity prediction with deep learning methods started around 2015/2016 [6-7]. The authors should completely re-write the first paragraph and embed their work properly, and carefully assign credit to pioneering works in the field of molecular property prediction. \n\nAlso, the second paragraph mentions \"current approaches (Xia et al., 2022; Liu et al., 2022; Luo et al., 2024; Rollins et al., 2024)\", which is unclear: does this mean \"currently best performing methods\" (in this case, the claim is false), or \"current LLM-based approaches\". Furthermore, current best-performing approaches for molecular property prediction are hybrid methods that combine deep neural networks on descriptors with GNN-based representations (like Yang, 2019) and those are not \"statistical pattern\" based. The authors should completely re-work the second paragraph to make its theme clear and cite appropriate works. \n\nThe authors claim that LLMs perform \"reasoning\" which is highly debated [8] and questionable. If the authors insist on the point that LLMs indeed perform reasoning, they should provide substantial evidence for that and also cite works criticizing the reasoning capabilities of LLMs.\n\nThe authors claim that \" [.fingerprint-based methods.] 
they normally rely on pre-defined fingerprint and may not fully capture the complex patterns [...]\", while it is rather the other way around: ECFP-based methods are as expressive as 1-WL, while many GNNs are not. SMILES-based approaches often lose stereo-chemical patterns. The authors should rectify this false claim.\n\n\n# B) Significance: The significance is very low due to inappropriate experiments and missing compared methods.\n\nThe authors perform molecular property prediction on only eight tasks, while usually methods perform this on 1000s of tasks (e.g. [9,10]). Additionally, the MoleculeNet benchmarks are outdated and should not be used anymore [11]. There is also for sure an error in evaluating the ClinTox task because it is impossible to predict clinical toxicity of a molecule at 99.4 ROC AUC. Clinical toxicity is a very complicated end-point that depends on a lot of factors of the individual person, other medication, genetic composition, etc, such that even experimental replicates would never reach this quality. Reporting such high values at predicting clinical toxicity also leads to ethical concerns (see below). \n\nThe compared methods should also include several descriptor-based approaches. Since ChemThinker is an LLM-based approach, other LLM-based methods, such as KV-PLM [12] and CLAMP [13], should also be compared. \n\nSince the authors claim in the abstract that their method is interpretable, they should rather not compare predictive performance, but focus on interpretability metrics, e.g., whether the toxicity-inducing parts of the molecules could be identified. \n\n# C) Originality: \n\nUsing LLMs and combining representations from different pre-trained models is already known. However, what could indeed provide some additional value is prompting LLMs to think about the general molecular structure. \n\n\n# D) Technical errors: \nThe overall approach is ad-hoc. It is unclear why exactly these components should be put together in the suggested way. 
The authors should properly motivate and justify their approach and design decisions. They should perform an ablation study to identify which components yield advances in predictive performance or interpretability. \n\nIt is unclear whether LLMs can properly handle the SMILES strings as input. Usually the tokenizer is inappropriate for molecules. A single molecule can have many different SMILES representations, which could lead to different internal representations of the LLMs. Furthermore, stereochemistry is often lost in the SMILES strings. The authors should perform SMILES augmentation to show that their method is invariant to permuting SMILES strings. The authors should also include several stereo-isomers to show that ChemThinker can meaningfully distinguish stereo-isomers.\n\nThe overall training objective is not mentioned. It is unclear whether only the MLP is trained or whether the whole architecture is fine-tuned. The authors should present their approach more clearly, write down the objective function, the considered and selected hyperparameters, and the training and implementation details. \n\nTypos:\n- Chapter 2 \"Reltaed work\"\n\nReferences:\n[1] Hansch, C., Maloney, P. P., Fujita, T., & Muir, R. M. (1962). Correlation of biological activity of phenoxyacetic acids with Hammett substituent constants and partition coefficients. Nature, 194(4824), 178-180. \n[2] Huuskonen, J., Salo, M., & Taskinen, J. (1998). Aqueous solubility prediction of drugs based on molecular topology and neural network modeling. Journal of chemical information and computer sciences, 38(3), 450-456. \n[3] Unterthiner, T., Mayr, A., Klambauer, G., Steijaert, M., Wegner, J. K., Ceulemans, H., & Hochreiter, S. (2014, December). Deep learning as an opportunity in virtual screening. In Proceedings of the deep learning workshop at NIPS (Vol. 27, pp. 1-9). Cambridge, MA. \n[4] Dahl, G. E., Jaitly, N., & Salakhutdinov, R. (2014). Multi-task neural networks for QSAR predictions. 
arXiv preprint arXiv:1406.1231. \n[5] Lusci, A., Pollastri, G., & Baldi, P. (2013). Deep architectures and deep learning in chemoinformatics: the prediction of aqueous solubility for drug-like molecules. Journal of chemical information and modeling, 53(7), 1563-1575. \n[6] Unterthiner, T., Mayr, A., Klambauer, G., & Hochreiter, S. (2015). Toxicity prediction using deep learning. arXiv preprint arXiv:1503.01445. \n[7] Mayr, A., Klambauer, G., Unterthiner, T., & Hochreiter, S. (2016). DeepTox: toxicity prediction using deep learning. Frontiers in Environmental Science, 3, 80. \n[8] Wang, B., Yue, X., & Sun, H. (2023). Can ChatGPT defend its belief in truth? evaluating LLM reasoning via debate. arXiv preprint arXiv:2305.13160. \n[9] Mayr, A., Klambauer, G., Unterthiner, T., Steijaert, M., Wegner, J. K., Ceulemans, H., ... & Hochreiter, S. (2018). Large-scale comparison of machine learning methods for drug target prediction on ChEMBL. Chemical science, 9(24), 5441-5451. \n[10] Stanley, M., Bronskill, J. F., Maziarz, K., Misztela, H., Lanini, J., Segler, M., ... & Brockschmidt, M. (2021, August). Fs-mol: A few-shot learning dataset of molecules. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2). \n[11] Walters, P. (2023). We need better benchmarks for machine learning in drug discovery. Practical Cheminformatics. \n[12] Zeng, Z., Yao, Y., Liu, Z., & Sun, M. (2022). A deep-learning system bridging molecule structure and biomedical text with comprehension comparable to human professionals. Nature communications, 13(1), 862. \n[13] Seidl, P., Vall, A., Hochreiter, S., & Klambauer, G. (2023, July). Enhancing activity prediction models in drug discovery with the ability to understand human language. In International Conference on Machine Learning (pp. 30458-30490). PMLR."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "My main request is for more detail to be included in section 3 specifically to show the reader what the orchestration of the LLMs is actually doing. I would want to understand what kind of questions to ask to get the best performance (and how they were reached), how to get task specific insights and how to generate inputs for RDKit. Sections 3.1, 3.2, and 3.3 would be really useful, but in their current state lack enough information to adequately replicate any results. \n\nFurther specific questions:\n\n* Why were the results only shown for the top 2 backbones in the tables? I would have expected to see all results there to better understand the properties of each model \n* Figure 3 was excellent - but I would like to see cleaner presentation, certainly flat bars if you think it should be a plot, but crucially the lack of numerical values to extract made analysis of these results impossible. Maybe consider a table as I think this result is great!\n* In the conclusion and introduction you make claims about providing insights into the predictions, it would be good to see more evidence of these insights. Perhaps these are included in the appendix and were missing in the submission. \n* I also have questions about data leakage - many of the molecules in the dataset have well-known properties, and the extent to which the LLMs are simply reporting known answers may not be reflective of performance on the benchmark. On one hand this doesn’t matter as the method is demonstrating how to extract this information from the models, but on the other the comparison against other SOTA models could be misleading in these cases. How do you think this could impact the results?\n * I had specific concerns in Table 1 for ClinTox where the results are reaching over 99.0, and in Table 2 for ESOL where the result of 0.44 is significantly lower than all other results. (NB: LLM4SD gets closest at 0.52)\n * More discussion of these results would add confidence."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "* Taking advantage of the knowledge contained in LLMs is a clear path for lots of scientific work and creating evaluations / proof of use cases is vital to increase adoption of this kind of approach. \n* The analysis of the way in which different models / lines of investigation contribute to solving each task offers insight into how the tasks benefit relatively from factual information learned during the pre-training stage of the LLM vs numerical information obtained through RDKit. \n* Evaluating over multiple backbones for the ChemThinker framework is good and starts to give insight into which models are strongest in different settings."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "I thank the authors for a really interesting read; extracting chemical insights from pre-trained LLMs is a great idea, and the amount of work done is clear to see. \n\nSummary:\n* The authors present the ChemThinker framework - a method to orchestrate large language models to extract meaningful insights \n* They use three strands of investigation - one looking at general molecular properties, one on the task specific properties, and one looking at “intuition”\n* The core idea is that the huge quantities of high quality data LLMs are pre-trained on often include many scientific papers and textbooks; by prompting the models appropriately much of this information can be extracted for individual molecules / tasks. \n* These textual properties are synthesised through a series of calls to the LLM to iteratively improve the outputs, and then these embeddings are combined and used as inputs to the final MLP layers for downstream tasks. \n* Of particular interest was the component contribution analysis where the authors showed how different tasks relied on the different LLM pathways with different strengths. \n* This paper shows an interesting approach to extracting the knowledge contained in LLMs for scientific work - as demonstrated here, this is clearly a promising avenue."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The exact orchestration of LLM calls was hard to extract from the paper. I was unsure if the processes were a single string of pre-defined prompts to extract answers, or if, as suggested in Fig. 1, there was more iteration involved to obtain the final answers. \n* The questions shown (if they are the full extent of the prompts) could be improved / expanded. This prompts the question: why only three, and why those three in that exact phrasing? \n* The process was less transparent for the intuition driven thinking in 3.2 - I can tell the LLMs are given a persona to derive “rules” but no examples of this are shown. Given that these rules are used to control which RDKit features are generated, this makes it hard to understand exactly what is being done here. \n* I had a similar problem with the task specific thinking in 3.3 - it is not clear where the tailored insights to T are coming from / how that is being prompted of the LLM. An example is said to be in the appendix, but the appendix was missing from the PDF submitted to OpenReview. \n* The typical size of the total concatenated embedding vector extracted per molecule, the number of calls to each LLM needed to generate it, the prompt structure used for each “agent”, and the size of the final MLP are all unclear. Personally I would want to see these details to place the evaluation in context.\n* The comparison to other work was fairly extensive - but the limited detail on the ChemThinker pipeline / model configuration made the comparisons harder to understand."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024chemthinker,\ntitle={ChemThinker: Thinking Like a Chemist with Multi-Agent {LLM}s for Deep Molecular Insights},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zlAUnwhE2v},\nnote={under review}\n}"
},
"abstract": {
"value": "Molecular property prediction is vital in drug discovery and cheminformatics, yet many current models lack interpretability, making it difficult for experts to understand the rationale behind predictions. To address this, we introduce ChemThinker, a novel large language models (LLMs) multi-agent framework designed to effectively control the internal representations of concepts and functions within LLMs. ChemThinker emulates the way chemists approach molecular analysis by integrating insights from three perspectives: general molecular properties, data-driven analysis, and task-specific factors. Each perspective uses an agentic approach to stimulate the LLM's internal representations, enabling more targeted and interpretable outputs based on the problem at hand, akin to how stimuli trigger the brain's cognitive processes. By feeding representations from these three perspectives into a simple multi-layer perceptron (MLP), ChemThinker achieves superior performance, significantly outperforming existing baselines across multiple benchmarks. Furthermore, our framework provides interpretable insights into the molecular mechanisms driving the predictions, making it a practical tool for drug discovery and other cheminformatics applications."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Molecular Property Prediction",
"Molecular Representation Learning",
"Multi-Agent LLMs"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/c7dcbfb496a3c22980f882a458f9130b28d28797.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to physical sciences (physics, chemistry, biology, etc.)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/9adbadc0a1fd5b8688887ebfe58bebd324cba792.pdf"
},
"title": {
"value": "ChemThinker: Thinking Like a Chemist with Multi-Agent LLMs for Deep Molecular Insights"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zmHqlXGTTl | SciPG: A New Benchmark and Approach for Layout-aware Scientific Poster Generation | main | Active | Scientific poster generation;multimodal extraction;multimodal generation | datasets and benchmarks | 3;6;6;6 | 4;3;3;4 | 3;3;3;3 | 2;3;3;3 | 3;2;3;3 | 5.25 | 3.5 | 3 | 2.75 | 2.75 | -0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- About baselines. For instance, in Section 2.3 the paper describes that previous methods focus on layout or composition. Would it be possible to select a random layout and allow the baseline to generate its content? The same thing applies the other way around, evaluating how the other baseline generates the layout. It would be interesting to compare both tasks separately.\n\n- Can you elaborate more on why the KL term is needed in the generative loss? Authors mention \" to prevent the model from\nbecoming overconfident.\" Some more detail would be appreciated.\n\n- Authors define an Adaptive Memory mechanism to manage long range dependencies. Is this done because of context length limitations in the BERT encoder? In that case, could this adaptive memory be avoided with sufficient context length? \n\n(see other points commented in Weaknesses)\n\nFormat Questions\n- I noticed you should use the \\\\citep{} command more when citing several works; the citations will appear better in the paper.\n- Did the authors change some of the margins of the template? Some parts are quite packed with content; it seems there is too much use of \\\\vspace."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper is generally well written and easy to follow.\n- It develops new dataset with strong level of processing and curation. It also provides a good level of documentation on how to reproduce the processing pipeline. This dataset will be of great benefit to the community.\n- The proposed method seems effective in addressing the limitations of previous works i.e. they focus on independently generating layout or content, and the proposed method performs both tasks.\n- The paper has substantial experiments and ablations of the method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper is focused on the task of scientific poster generation, and introduces two novel contributions in the field. First, a dataset of 10k examples with pairs of papers and posters (SciPG), with relevant annotations about content and layout. Second, they propose a new method for poster generation, designed to jointly generate the layout and the content of a poster. The method first extracts relevant texts and images from an input document (the paper) by computing CLIP embeddings for images and RoBERTa embeddings for texts and predicting an extractive score. They keep the top-k elements by predicted score. Then, they leverage a BART model to process multimodal inputs and generate the summarized (paraphrased) texts and layout of the poster."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Authors claim this is a large-scale dataset, which might not be the best term to use, given the dimension of other datasets considered large scale (in the millions). \"Medium-size\" would be better suited.\n- The paper does not discuss PosterLayout [1]. Even though it does not focus on scientific poster generation, it is good to have it as a reference.\n- There is just one baseline to compare against. I understand this is a new task but the authors should be absolutely certain that there are no other possible baselines. (See my question about this later)\n- The qualitative samples in Figure 4 show that the texts are overlapping with each other, showing that the method has quite some room for improvement. \n- The human evaluation is done over 3 humans, which seems rather small for drawing conclusions.\n\n[1] Hsu, Hsiao Yuan, et al. \"Posterlayout: A new benchmark and approach for content-aware visual-textual presentation layout.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "NA"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Is it possible to incorporate prompts to generate the posters, allowing for personalized control by users over the final output?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.\tThis work explores an interesting task called layout-aware scientific poster generation, which is useful for generating flexible posters from scientific papers through integrated automatic content extraction and layout design.\n2.\tA large-scale dataset is created, containing over 10,000 pairs of scientific papers and their corresponding posters.\n3.\tExtensive experiments are conducted to evaluate both the qualitative and quantitative performance of the proposed approach.\n4.\tPractical issues, such as GPU memory consumption and long-term dependencies, are considered and addressed in this paper."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates the layout-aware scientific poster generation (LayoutSciPG) task. Specifically, it addresses three challenges: multimodal extraction, multimodal generation, and the need for large-scale training data. To tackle LayoutSciPG, the authors develop a multimodal extractor-generator framework that includes extraction and interactive generation modules. Overall, the proposed solution is sound and reasonable. Additionally, a novel poster dataset has been constructed."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tAlthough the paper claims to present a novel research task, the research novelty is not particularly significant, as poster design has been extensively studied.\n2.\tThe technical contributions are marginal, as most techniques have already been developed and are commonly used. For instance, the multimodal extractor (MDE) is based on RoBERTa and BiLSTM, while the interactive generator (IG) relies on BART and RMT. The developed framework is relatively straightforward. The authors could better articulate these differences to highlight their technical novelty.\n3.\tThe experimental results are not convincing enough, as they only compare with one baseline, AdaD2P. Including comparisons with more recent advanced baselines would strengthen the advantages and make the empirical results more persuasive.\n4.\tAs shown in Figure 4, the generated posters are still poor and not suitable for practical applications. It would be beneficial to present and compare posters generated with other baselines."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "#### Comments\n\nThe title of the paper is SciPG, but this term is not defined anywhere in the text. Only from Table 1 do we know that this is the name of the dataset. This leads to confusion especially since a similar term (LayoutSciPG) is used very often.\n\nImgR and ImgP are not clearly defined. While it can be understood from the context, being more precise could alleviate confusion.\n\nTable 2 is a bit hard to read. Which column belongs to documents and which one to posters? Adding vertical bars or cmidrule's could help.\n\n#### Questions\n\nOne thing I don't understand is the role of the BiLSTM on top of the Roberta embeddings. The authors claim that this \"captures contextualized representations\" (l.257) but the Roberta embeddings should already be contextualized. Unfortunately the contribution of the LSTM has not been investigated in the ablation study.\n\nPosters might contain original content (e.g. images that do not appear in the paper as mentioned in l.183). Do the authors have an idea how that could be addressed in future work?\n\nYour dataset contains a validation split but do you actually use it somewhere? Maybe for the experiments in Figure 2?\n\nDo the authors think that a perceptual image similarity metric [1] between generated and reference posters could be a useful addition to the automatic evaluation?\n\n[1] DreamSim: Learning New Dimensions of Human Visual Similarity using Synthetic Data"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The provided dataset is orders of magnitude larger than existing related datasets and will surely be useful to other researchers. The authors promise to release code and data artifacts.\n\nThe proposed architecture is also very novel and constitutes a core contribution of this work. Even though there are a lot of newly introduced custom components, the authors quantify the contribution of each component in an ablation study.\n\nThe human evaluation supports the findings from automatic evaluation and increases their credibility."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors define the task of layout-aware scientific poster generation (LayoutSciPG), which takes both content extraction and layout into account (compared to previous work, which looks at these things in isolation). To facilitate data-driven approaches, the authors first collect a novel dataset (SciPG) of paper-poster pairs and automatically align text and image contents. The authors then introduce a novel two-stage pipeline architecture tailored to LayoutSciPG and finetune it on SciPG. Both in automatic and human evaluation the authors showcase the effectiveness of their approach to automatically generate scientific posters conditioned on papers."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The main experimental results are a bit lacking due to the lack of baselines (only one baseline provided). While I understand that the task is a novel one, I don't see why the approaches that tackle content extraction and layout in isolation mentioned in the introduction and related work couldn't serve as baselines. Providing a simple end2end baseline (or maybe a diffusion-based one) would also have been insightful.\n\nThe paper definitely needs more examples. As of now, only two (somewhat hard to read) examples are provided in the main text. The authors should provide more examples in the appendix.\n\nWhile the paper is generally easy to follow, some parts don't feel very polished and can potentially be confusing. E.g., there are passages with duplicated information (l. 45-51 & l. 125-130) and sometimes the paper is short on details (l. 387-388 & l. 395-397)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "For the discussion in Section 4.3.2, what is the total token length for this task?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.\tIt focuses on a very important and practical problem – scientific poster generation. This problem receives little attention currently. If it is well-solved, lots of people will benefit from it.\n\n2.\tThe paper is easy to follow. The idea is clearly expressed.\n\n3.\tIt contributes a new dataset for scientific poster generation, which will have a big impact on this domain if released."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work studies an important problem – automatically generate a poster from a scientific paper. It contributes a dataset with 10k pairs of scientific papers and their corresponding posters. It proposes an extractor to retrieve critical text and image elements from the papers and a generator to paraphrase the extracted content and generate a layout."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tThe reason behind the model architecture choice is not clearly explained. \n\na) Why is a BiLSTM used instead of a Transformer in the multimodal extractor? The other components in the framework are all based on Transformers. What is the reason for such a special design that only uses a BiLSTM in this part? \n\nb) In the interactive generator, what does ‘interactive’ refer to here? Does it mean users can participate in this process? \n\n2.\tThe evaluation for the layout part is not sufficient. Layout is important for a scientific poster as it relates to whether the poster is attractive and conveys information effectively. \n\na) There are lots of studies that focus on the layout generation problem [1][2]. They take the elements as input and generate their positions. The comparison with them could be using the output of the multimodal extractor as their input. If the performance of the existing layout generation model is worse than the proposed interactive generator, I will be convinced that it is necessary to design a specific interactive generator; otherwise, the novelty and contribution of this work will be very limited. \n\nb) Important metrics for layout are missing, e.g., FID and alignment used in existing work [1][2].\n\n3.\tThere is no discussion or ablation study about whether the problem decomposition is reasonable. Currently, the problem is decomposed into two parts, where the first part is to extract key text and images from the paper and the second part is to paraphrase text as well as generate a layout. Why not make the paraphrase task a separate part or merge it into the first part, since it is a natural language processing task and is far from the layout generation problem? Besides, if the task is decomposed as I suggested, the existing layout generation techniques [1][2] can be reused, which may be beneficial for overall performance.\n\n4.\tThere are only two qualitative results, which are not enough to justify the performance of the proposed method. Besides, from the qualitative results shown, I can find many overlaps between elements, which indicate that the performance is not good enough.\n\n[1] LayoutFormer++: Conditional Graphic Layout Generation via Constraint Serialization and Decoding Space Restriction\n\n[2] PosterLlama: Bridging Design Ability of Language Model to Contents-Aware Layout Generation"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "A New Benchmark and Approach for Scientific Poster Generation"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024scipg,\ntitle={Sci{PG}: A New Benchmark and Approach for Layout-aware Scientific Poster Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zmHqlXGTTl},\nnote={under review}\n}"
},
"abstract": {
"value": "Scientific posters are an effective and expressive medium for conveying the core ideas of academic papers, facilitating the communication of research techniques. However, creating high-quality scientific posters is a complex and time-consuming task that requires advanced skills to summarize key concepts and arrange them logically and visually appealingly. Previous studies have primarily focused on either content extraction or the layout and composition of posters, often relying on small-scale datasets. The scarcity of large, publicly available datasets has further limited advancements in this field.\nIn this paper, we introduce a new task called layout-aware scientific poster generation (LayoutSciPG), which aims to generate flexible posters from scientific papers through integrated automatic content extraction and layout design.\nTo achieve this, we first build a large-scale dataset containing over 10,000 pairs of scientific papers and their corresponding posters. We then propose a multimodal extractor-generator framework, which employs a multimodal extractor to retrieve key text and image elements from the papers and designs an interactive generator with an adaptive memory mechanism to seamlessly paraphrase the extracted content and generate a structured layout. This approach effectively tackles challenges related to GPU memory consumption and long-term dependencies when handling the lengthy inputs (scientific papers) and outputs (posters). Finally, both qualitative and quantitative evaluations demonstrate the effectiveness of our approach while highlighting remaining challenges."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Scientific poster generation",
"multimodal extraction",
"multimodal generation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/e45f894b696839ab37a91c7131ddd57fee53b27d.pdf"
},
"presentation": null,
"primary_area": {
"value": "datasets and benchmarks"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "SciPG: A New Benchmark and Approach for Layout-aware Scientific Poster Generation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zmmfsJpYcq | IgGM: A Generative Model for Functional Antibody and Nanobody Design | main | Active | de novo antibody design;complex structure prediction;protein design | applications to physical sciences (physics, chemistry, biology, etc.) | 3;5;5;8 | 4;4;4;4 | 3;2;3;2 | 2;2;2;4 | 2;3;3;3 | 5.25 | 4 | 2.5 | 2.5 | 2.75 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. It was observed in Table 1 that HDock does not have an asterisk (*) symbol. Does this mean that Hdock did not use the information of the epidemic, resulting in lower indicators such as DockQ?\n2. ESM-PPI is a model retrained based on Sabdab data. Will the training data of ESM-PPI appear in the validation set of IgGM? Has this part been considered?\n3. As shown in Table 1, IgGM (AF3) uses the structure predicted by AF3 as the initial state. However, the authors do not clearly specify whether the sequence input to AF3 is the ground truth sequence. If it is, this would indirectly leak the answer, leading to inflated metrics for both the subsequent structures and sequences. A more rigorous approach would be to use 'initial state sequence + AF3 (random initial state sequence)' instead of 'random initial state sequence + AF3 (ground truth sequence)'"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The innovation of the article lies in further extending the scope of co-design. The model can not only design amino acids in the CDR region, but also complete antigen docking, which is unprecedented. IgGM not only has excellent performance in predicting antibody structure, especially in the CDR region, but also outperforms existing algorithms in docking. IgGM not only shows SOTA performance on conventional antibodies, but also performs well on nanobodies, further demonstrating the application value of IgGM."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The article discusses the challenges in practical applications where obtaining the structures of antigens and antibodies (including the framework region) is often unfeasible. To address this, it introduces an end-to-end algorithm called IgGM, which simultaneously predicts the sequences and structures of the CDR regions, performs docking based on the epitope, and predicts the structure of the antigen-antibody complex. Additionally, the article employs a two-stage training method to enhance the model's performance in predicting both structures and sequences."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The core architecture of the algorithm is very similar to AF2, and the accuracy of structural prediction seems to be attributed to AF2. However, the innovation in this part is not strong enough.\n2. The article did not elaborate on the introduction and explanation of the Inter chain Feature Embedding Module and Structure Encoder.\n3. In line 343, the authors define the success rate as DockQ > 0.23. The authors should either provide a reference to justify the selection of this threshold or elaborate on the rationale behind it.\n4. From the overall text, it appears that the length of the antibody CDR regions is also specified; however, the authors do not clarify this in the problem formulation."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. It would be valuable to explore why AlphaFold 3 (AF3) shows stronger performance in structure prediction than in docking-related metrics, perhaps due to limitations in capturing finer antigen-binding dynamics.\n2. Why did the authors opt not to use RAbD as the test set, as has been customary in prior studies?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. IgGM successfully establishes an antibody co-design pipeline that integrates several key elements previously deemed essential by the research community.\n2. IgGM shows promising docking success rates over AF3, suggesting that it effectively captures essential antibody-antigen interaction patterns."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces IgGM, a generative model designed for creating functional antibody and nanobody structures. It integrates sequence and structural generation using a multi-level network approach comprising a pre-trained language model, feature encoder, and prediction module. Experimental results demonstrate its applicability in antibody and nanobody design tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Although the IgGM framework demonstrates its effectiveness on various antibody design tasks, it relies heavily on components and algorithms established in prior work, limiting the originality and impact of its contributions to the field.\n2. The study lacks direct comparisons with RFDiffusion while they perform similar tasks\n3. The authors didn't report the variance/robustness of their performance."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. In line 278, the probabilities of 4:2:2:2 are assigned to the model to design CDR H3, CDR H, and all CDR. So, do you mean CDR-H3 only for 4/10, all CDR-Hs for 2/10, and all CDRs for 2/10 ?\n\n2. I am confused as to why you ultimately trained a consistency model. I comprehend that the key advantage of a consistency model lies in accelerating the generation process, enabling completion in fewer steps. However, in a scientific problem such as antibody design, is acceleration truly significant? Related to this, you have employed 10 different sampling steps in the appendix, but aside from the success rate (SR), the other four metrics seem to decline with the increase in the number of steps. Even SR shows a decline after surpassing 10 steps. These experimental results differ from what we might anticipate; I would appreciate a detailed study and explanation. Additionally, how does the original model perform without using the consistency model?\n\n3. Some physical metrics (like energy) are needed for a more refined evaluation. \n\n4. The model name (DiffIg) in Figure 4 has not been updated. \n\n5. The experiment in Section 4.2 seems to be the most important one in this paper, but I am somewhat unclear about the experimental setup here.\n \n 5.1. 'Structure prediction⇒docking⇒CDR generation⇒side-chain packing' is not the process of dyMEAN. On the contrary, this is the 'Existing Works Pipeline' shown in Figure 1 to highlight the end-to-end approach of dyMEAN.\n \n 5.2. In the stage of structure prediction, how is the sequence of the CDR part defined? Are random sequences used, or are special symbols like [MASK] employed?\n \n 5.3. The AAR of dyMEAN on CDR-H3 is much lower than reported in the literature. Although the training and test data used in this paper differ from the original dyMEAN article, such a significant disparity needs to be explained (43.65%->29.4%).\n \n 5.4. 
In both Section 4.1 and 4.2, there is a version of IgGM that uses AF3-predicted structures for initialization, making the prior distribution inconsistent with N(0, I). How does IgGM predict the structure in this scenario? Was an additional IgGM trained with AF3 predicted structures as prior? If it's the latter, how were the noise addition and denoising processes conducted?\"\n\n6. Finally, for any researcher working on AI-driven antibody design, aligning in silico evaluation with wet-lab experimental results is an essential consideration. Unfortunately, I did not find any discussion related to this in the main paper. If IgGM were to be eventually validated experimentally, do you believe the advantages of IgGM over other methods would still persist? Also, what improvements could be made?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Compared to most existing antibody design works, this study undertakes the design of the antibody CDR region without providing the antibody framework, consistent with dyMEAN. This setting is closer to real-world requirements and poses a greater challenge. \n\n2. The two-stage training approach brings a significant performance improvement. \n\n3. Additionally, targeted designs focusing on inter-chain interactions were conducted."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces IgGM, a diffusion model for simultaneously generating the structure and sequence of antibody CDR regions, as well as the structure of antibody FR regions. By employing a two-stage training approach (folding -> design), IgGM is capable of conducting antibody design and docking tasks, and achieves commendable performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. It seems that the work does not fundamentally differ from DiffAb in terms of diffusion algorithm, and it bears some resemblance to ESMFold in model design. \n\n2. Having only five samples for each example is somewhat limited. \n\n3. The main paper does not provide an explanation of the diffusion algorithm, nor does it detail the representation of proteins and the prior distribution associated with each modality."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. How is epitope information encoded in the model, and to what extent can the model follow the guidance of a specific epitope?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The authors made a comprehensive comparision between IgGM and other baseline methods cross multiple metrics, demonstrating impressive results.\n2. This paper is well-structued, with carefully prepared figures that enhance the reader's understanding."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces IgGM, a generative model for designing antibody/antigen structures and sequences for a given antigen. IgGM uses features from a pre-trained language model and a diffusion framework to generate antibodies and nanobodies. The model shows great performance in both structure prediction and antibody design."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Major comments:\n\n1. One major limitation is that the generation of antibodies depends on their framework regions, which predetermine the length of CDRs and may not always be available when handling a new antigen. Since the optimal CDR lengths are not known a priori, the authors should demonstrate the capability of IgGM in designing antibodies with varying CDR lengths.\n\n2. However, in line 147, the authors state that their method \"...enables the design of the whole antibody structure even without experimental structures.\" This claim requires clarification, as the results presented only demonstrate the ability to design CDRs.\n\n3. I suggest that the authors should compare their method with other antibody deisgn methods, such as:\n - Co-design models: AbX [1] (very similar framework with IgGM)\n - Structure-only models: RFdiffusion for antibody design [2]\n - Sequence-only models: AbLang [3] and IgLM [4]\n\nMinor comments:\n\n1. In figure 6, \"Aligin\" should be \"Align\"\n\nReferences\n\n[1] Zhu T, Ren M, Zhang H. Antibody Design Using a Score-based Diffusion Model Guided by Evolutionary, Physical and Geometric Constraints[C]//Forty-first International Conference on Machine Learning.\n\n[2] Bennett N R, Watson J L, Ragotte R J, et al. Atomically accurate de novo design of single-domain antibodies[J]. bioRxiv, 2024.\n\n[3] Olsen T H, Moal I H, Deane C M. AbLang: an antibody language model for completing antibody sequences[J]. Bioinformatics Advances, 2022, 2(1): vbac046.\n\n[4] Shuai R W, Ruffolo J A, Gray J J. IgLM: Infilling language modeling for antibody sequence design[J]. Cell Systems, 2023, 14(11): 979-989. e4."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Efficient and accurate design methods for antibody and nanobody sequences and structures tailored for real-world design scenarios."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024iggm,\ntitle={Ig{GM}: A Generative Model for Functional Antibody and Nanobody Design},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zmmfsJpYcq},\nnote={under review}\n}"
},
"abstract": {
"value": "Immunoglobulins are crucial proteins produced by the immune system to identify and bind to foreign substances, playing an essential role in shielding organisms from infections and diseases. Designing specific antibodies opens new pathways for disease treatment. With the rise of deep learning, AI-driven drug design has become possible, leading to several methods for antibody design. However, many of these approaches require additional conditions that differ from real-world scenarios, making it challenging to incorporate them into existing antibody design processes. Here, we introduce IgGM, a generative model for the de novo design of immunoglobulins with functional specificity. IgGM produces antibody sequences and structures simultaneously for a given antigen, consisting of three core components: a pre-trained language model for extracting sequence features, a feature learning module for identifying pertinent features, and a prediction module that outputs designed antibody sequences and the predicted complete antibody-antigen complex structure. IgGM has shown effectiveness in both predicting structures and designing novel antibodies and nanobodies, making it relevant in various practical scenarios of antibody and nanobody design."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"de novo antibody design",
"complex structure prediction",
"protein design"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/1317aff04106f65f2035ea9f20b6c322caa87d99.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to physical sciences (physics, chemistry, biology, etc.)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "IgGM: A Generative Model for Functional Antibody and Nanobody Design"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zn0eqMtsrw | GUD: Generation with Unified Diffusion | main | Active | diffusion models;renormalization group;autoregressive models;wavelet decomposition;denoising score matching | generative models | 3;6;6;6 | 4;3;4;4 | 3;3;4;3 | 2;4;3;3 | 3;3;3;3 | 5.25 | 3.75 | 3.25 | 3 | 3 | -0.333333 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weaknesses.\n\nThe time-dependent diffusion scale term has not been investigated in much detail as far as I am aware and I believe this should be the main focus of the paper or at least more attention. What are the benefits of this compared to cascading diffusion, can cascading be seen as a case of this?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper is well explained. \n\nThe authors bring attention to the flexibility in the diffusion model paradigm, though as discussed below this has been discussed in many prior papers.\n\nThe authors introduce what I believe to be a novel interpretation and use case for time-varying diffusion scale timers, leading to an autoregressive type forward process, applying noise to separate components independently. A similar procedure was used for diffusion in frequency space by applying different diffusion noise scales per frequency level [1] but these were not set to 0 as described here.\n\n\n[1] Blurring diffusion models, Hoogeboom et al 2022"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors describe diffusion models with a under class of dynamics and marginals than the typical scaled Ornstein Uhlenbeck or Brownian motion reference processes used in the vast majority of diffusion model papers. In particular, the authors consider a linear transformation to perform the diffusion under a change of basis; varying the variance of the prior Gaussian marginal to match the data distribution; considering time dependent diffusion scale terms which can lead to auto-regressive-like dynamics."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "## Weakness 1\nWhile the authors attempt to unify the design of dynamics for references; two of the three ideas proposed are not novel so it is unclear what the main contributions of the paper are. \n\n1) Using a change of basis\nApplying diffusion in a transformed space / change of basis has been done before. Although [1] focuses on change of basis to frequency basis, section 4.1 of [1] explicitly explains how any other change of basis can be performed. I do not see any compelling evidence to suggest one basis over another in this submission.\n\n2) Prior distribution\nDiscussion of variance of prior distribution was first discussed in [3], and referred to as Technique 1. This is still using diagonal covariance. \n\nIt is not clear how scalable learning the covariance matrix for a full high dimensional data distribution would be or if it would even be beneficial.\n\nThe time-dependent diffusion scale term has not been investigated in much detail as far as I am aware and I believe this should be the main focus of the paper or at least more attention.\n\n## Weakness 2\nThe second major weakness is in limited numerical evaluation. The FID scores shown for CIFAR10 are >20; significantly far from standard diffusion model performance of <3. It is not possible to evaluate whether there any benefit to generative modelling for the proposed methods without compelling numerical support.\n\nWhilst I am not particularly interested in SOTA generative models FID <2, for toy datasets like CIFAR10 I would expect at least FID<4 given the abundance of code available for this and the limited novelty for 2/3 methods.\n\n## Weakness 3\nIt is not clear to me the theoretical soundness of using the autoregressive approach for extending existing images i.e. changing dimension from previously trained model. It seems the generative process is no longer related to the time reversal of an SDE given the dimension changes. 
Can this be formalised?\n\nMinor\n- Blurring diffusion models [2] was a follow up to inverse Heat Dissipation Generative Model [1]. This should be cited and discussed as it was a pioneering paper in this area.\n\n\n[1] GENERATIVE MODELLING WITH INVERSE HEAT DISSIPATION, Rissanen et al 2022\n[2] Blurring diffusion models, Hoogeboom et al 2022\n[3]Improved Techniques for Training Score-Based Generative Models, Song and Ermon, 2020"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Could the authors explain how to select in the large design space the various parameters/hyperparameters?\n\nCan the authors briefly position wrt the works like the ones cited in the weaknesses section?\n\nMinor: Figure 7 is qualitatively difficult to interpret from someone not specialized in the field. I suggest the authors to either add some extra comments or produce a similar image for a dataset which is more understandable for a generic reader."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The Generative Unified Diffusion (GUD) model provides a novel unification of diffusion and autoregressive generative approaches, allowing a flexible transition between simultaneous and sequential generation processes. This ability to bridge methods expands the framework’s application to a broad spectrum of tasks, from inpainting and sequential data extension to standard generative modeling. By creating a model that can interpolate between different generative styles, GUD allows developers to tailor the generation process to specific needs, enhancing control over the structure and dependencies of generated data.\n\nOne of GUD’s most notable strengths is its capacity for component-wise noise scheduling, which enables a hierarchical and selective approach to noising different parts of the data. This flexibility allows the model to prioritize important features by applying noise schedules tailored to specific components, leading to a more efficient and accurate generative process. Combined with its support for multiple basis representations—such as pixel, PCA, Fourier, and wavelet bases—GUD is adaptable to various data types and structures, making it particularly suitable for applications that benefit from multi-scale or hierarchical data representations.\n\nAdditionally, GUD’s design includes a whitening process, which aligns the data and noise distributions, providing better variance control throughout the generative process. This feature simplifies denoising and increases model stability, potentially reducing training time by minimizing noise-related artifacts. By supporting flexible basis selection, component-wise noise control, and variance alignment, GUD allows for refined generative modeling that can adapt to diverse tasks and applications, offering a powerful tool for high-quality, customizable data generation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work introduces the Generative Unified Diffusion (GUD) model, a framework that expands the flexibility of diffusion-based generative models by enabling diverse configurations in basis representation, noise scheduling, and prior distributions. Standard diffusion models transform noise into data through a learned reverse process. Here, GUD leverages concepts from physics, specifically renormalization group flows, allowing distinct configurations in the process, such as using Fourier, PCA, or wavelet bases, and implementing component-wise noise schedules to tune noise levels for different data parts.\n\nThe GUD framework unifies diffusion and autoregressive models, bridging differences between simultaneous and sequential generation. It introduces soft-conditioning, where the model can conditionally generate components based on previously generated data, enabling partial dependency across features. The approach supports more efficient training, flexible architectural designs, and tasks requiring conditional generation, inpainting, or sequential extensions.\n\nA key technical innovation is in the model’s flexibility of noise schedules and priors. GUD models allow each component a unique noise schedule, enabling a range of generation hierarchies from purely autoregressive (extreme component-wise scheduling) to standard diffusion. Additionally, a whitening transformation using PCA stabilizes the variance, simplifying the denoising process.\n\nExperiments demonstrate the framework's adaptability across various data representations, including PCA, Fourier, and wavelet bases. By controlling softness and hierarchical order in noise schedules, GUD supports both hierarchical and spatially sequential generation, showing improved performance on benchmark image generation tasks, like CIFAR-10."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The GUD framework is flexible, and consequently introduces significant computational complexity. Each configuration, such as basis choice (PCA, Fourier, wavelet) and component-wise noise scheduling, requires tuning, making the model resource-intensive. This complexity can hinder scalability, especially in high-dimensional data applications where each choice impacts the computational load.\n\nArchitecturally, GUD’s design adds complexity by requiring modifications like cross-attention mechanisms for conditioning on component-wise noise states. These additions complicate the implementation and increase the risk of instability during training, as standard architectures like U-Nets are not inherently optimized for GUD’s intricate conditioning needs. This limitation might however be not so crucial.\n\nFinally, I think the authors could have expanded the comparison with related works. As (non exhaustive) examples, non isotropic noise perturbation has been considered in [1] and optimal steady state covariance wrt the data distribution has been investigated [2].\n\n\n[1] Voleti et al, Score-based Denoising Diffusion with Non-Isotropic Gaussian Noise Models, NeurIPS 2022 Workshop on Score-Based Methods.\n\n[2] Das et al, Image generation with shortest path diffusion, ICML 2023"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Could the authors summarize the experimental results from Sec. 5.1, particularly regarding how the choice of basis, prior, and noising schedule contributes to performance compared with standard diffusion models?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "1. The paper addresses limitations in standard diffusion models by proposing an interesting and innovative Generative Unified Diffusion (GUD) model.\n2. The theoretical foundation of the paper is solid, and the presentation is clear.\n3. The analyses and designs within the GUD framework are novel and potentially valuable across multiple applications."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes Generative Unified Diffusion (GUD), an extension of standard diffusion models based on the Ornstein-Uhlenbeck process. By defining appropriate orthogonal transformations, the authors introduce novel analyses and designs within the GUD framework, including SNR analysis, soft-conditioning, whitening, and orthogonal transformations. The authors conclude with experiments that validate these designs, showcasing GUD's potential in various applications."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. **Limited empirical evaluation:** The experiments primarily serve to validate the proposed designs (pixel/PCA/FFT). While these results offer some insights, the evaluation lacks depth, particularly in quantifying each design’s impact on GUD's performance. More comprehensive quantitative and qualitative results would better demonstrate the effectiveness of each design.\n\n2. **Limited practical application contribution:** Although the paper suggests various potential applications, it appears these may not be fully viable in practice. Providing further insights into real-world application strategies would enhance the paper's practical relevance.\n\n3. **Missing discussion of related work:** One significant application of GUD is the component-wise scheduling for different states used in sequential generation (as outlined in Sec. 5.2). As a comparison, [1,2] also propose distinct schedules for different components. Could the authors discuss these related works or provide a comparative analysis within the GUD framework?\n\n[1] Rolling Diffusion Models, ICML 2024 \n[2] Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion, NeurIPS 2024"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "NA"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See above"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. **Unified Framework**: The authors present a cohesive framework for diffusion generative models, broadening design options.\n\n2. **Structured Components**: The framework is well-organized around the Ornstein-Uhlenbeck process, prior distribution choice, and component-wise noise scheduling, enhancing its theoretical foundation.\n\n3. **Diverse Design Examples**: The paper includes examples of various diffusion designs, demonstrating the framework’s flexibility and applicability."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a unified framework for diffusion generative models, inspired by renormalization concepts from physics, allowing for flexible design choices in representation, prior distribution, and noise scheduling. The framework introduces soft-conditioning models that blend diffusion and autoregressive approaches, potentially enabling more efficient training and versatile generative architectures."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. **Limited Experimental Scope**: The experiments are primarily conducted on CIFAR-10, a relatively small dataset. Datasets with larger images would better validate the findings and demonstrate the framework’s effectiveness in diverse settings.\n\n2. **Insufficient Explanation of Unified Diffusion and Autoregressive Generation**: The explanation on how the framework unifies standard diffusion and autoregressive generation lacks clarity. Providing a specific example of the component-wise noise schedule would enhance understanding and illustrate this unification more concretely."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduce a unified diffusion framework that bridges diffusion and autoregressive models by integrating flexible data representations and component-wise noising schedules."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024gud,\ntitle={{GUD}: Generation with Unified Diffusion},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zn0eqMtsrw},\nnote={under review}\n}"
},
"abstract": {
"value": "Diffusion generative models transform noise into data by inverting a process that progressively adds noise to data samples. Inspired by concepts from the renormalization group in physics, which analyzes systems across different scales, we revisit diffusion models by exploring three key design aspects: 1) the choice of representation in which the diffusion process operates (e.g. pixel-, PCA-, Fourier-, or wavelet-basis), 2) the prior distribution that data is transformed into during diffusion (e.g. Gaussian with covariance $\\Sigma$), and 3) the scheduling of noise levels applied separately to different parts of the data, captured by a component-wise noise schedule. \n Incorporating the flexibility in these choices, we develop a unified framework for diffusion generative models with greatly enhanced design freedom. In particular, we introduce soft-conditioning models that smoothly interpolate between standard diffusion models and autoregressive models (in any basis), conceptually bridging these two approaches. \nOur framework opens up a wide design space which may lead to more efficient training and data generation, and paves the way to novel architectures integrating different generative approaches and generation tasks."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"diffusion models",
"renormalization group",
"autoregressive models",
"wavelet decomposition",
"denoising score matching"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/7d845c143f2db3679b17bf4eb8c066bbdba8c7a8.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "GUD: Generation with Unified Diffusion"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
znGnmAM44K | The other you in black mirror: first steps from chatbots to personalized LLM clones | main | Active | Large Language Models (LLMs);Personalized AI;Turing Test;AI Safety | alignment, fairness, safety, privacy, and societal considerations | 3;5;5;5 | 4;4;4;4 | 2;3;2;2 | 2;2;2;3 | 2;3;2;3 | 4.5 | 4 | 2.25 | 2.25 | 2.5 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "I would like to thank the authors for mentioning the potential for undesirable conduct in the discussion. A tool like this can be useful in many domains where personalized information is valuable. For example, in sales, people might interact with chatbots designed to convince them to make a purchase. As noted in the paper, if this tool is used to create chatbots impersonating public figures, it could be exploited to scam individuals in various ways, such as through phishing emails. Generally, a personalized LLM is highly likely to introduce bias in the model's outputs. Therefore, it is crucial to implement fairness and bias detection measures to mitigate biased outcomes. This is something I believe the authors should address in their paper, and ideally, they should incorporate it into their experimental process. Another ethical consideration, which I have already mentioned in the section on weaknesses, is the omission of the concern raised by person A, who allowed the authors to use their data."
},
"flag_for_ethics_review": {
"value": [
"Yes, Privacy, security and safety"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "The questions I present in this section are closely related to the Weaknesses section where I share my thoughts, so please review both parts.\n \n 1) In lines 201-2023, you mention that the responses generated by the LLM in the first prompt were quite long and easily detected and that you later adjusted the length. Have you tried examining the correlation between person A's ground truth responses and A-clone's responses? For example, you could use metrics such as ROUGE-1, ROUGE-L, Persona F1, and Win rate, as suggested in the paper you referenced, \"Kai Zhang, Lizhi Qing, Yangyang Kang, and Xiaozhong Liu. Personalized LLM Response Generation with Parameterized Memory Injection\" (2024).\n \n 2) In lines 198-200, you state that, based on the questions you provided to A and A-clone, the test volunteers (such as A's family) had to predict which response belonged to A. Why didn’t you ask the test volunteers to come up with a set of questions they wanted to test, and then have both A and A-clone respond to those questions, incorporating them into the evaluation process? For example, in the paper you referenced, \"Daniel Jannai, Amos Meron, Barak Lenz, Yoav Levine, and Yoav Shoham. Human or Not? A Gamified Approach to the Turing Test\" (2023), they describe strategies that helped participants identify whether they were interacting with a chatbot or a human. Having such information for your model would provide deeper insights into its abilities.\n \n 3) In lines 203-208, you mention that you applied an SVM to determine if person A’s responses were easily recognized based only on length, with an accuracy of 0.48. Have you considered using a state-of-the-art classification model? In my opinion, this accuracy doesn’t provide much information, given that you used a simple classifier like SVM.\n \n 4) In line 770, regarding Table 2, you present some statistics related to the experiment. 
Did you perform any correlation tests to identify trends between the variables (e.g., gender, age range, etc.) and the test volunteers’ responses? For instance, it’s possible that individuals with a PhD level of education might be better at discerning whether something was generated by an AI."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "To the best of my knowledge, the main strength of this work is that it uses real email data spanning a 20-year period from a single individual. This provides a strong foundation for developing a realistic personalized LLM."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The researchers behind this paper developed an LLM model called A-clone, which is built on pretrained LLMs and further fine-tuned with a private dataset from a single volunteer referred to as A, without applying any anonymization techniques. They utilized the pretrained model Llama3-70B, combining fine-tuning with QLoRA. It is important to note that they conducted the experiments using prompts without modifying the model's architecture. The model was evaluated in two ways. First, they gathered responses from A, A-clone, other LLMs, and A's family members attempting to mimic A. A Turing-like test was conducted with 31 participants, who had varying degrees of familiarity with A, to determine if they could correctly identify A's genuine answers in a Q&A task. The participants correctly identified A's real responses 55\\% ± 7\\% of the time, which is just above chance. A-clone outperformed all other baselines in replicating A's responses. In the second evaluation, they compared A-clone's answers to A's across 10 tests covering topics such as psychology, morality, career, political views, and general knowledge, consisting of a total of 484 questions. A-clone's answers demonstrated a high level of agreement with A's responses."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I noticed some errors in the presentation of the text, as well as certain aspects of the experimental stage that could have been improved. While the work has a solid foundation, I feel that both the presentation and the development of the experiments were somewhat rushed. Below, I will outline the specific points I identified. I should also mention that the methodology and the creation of the model, called A-clone, do not appear particularly unique to me. The novelty of this work seems to derive primarily from the data, which could potentially impact the model's quality. In the related works section, you mention several papers that, in my opinion, could have influenced and improved your experimental setup. However, it seems that these works were not utilized. It gives the impression that you referenced these papers more for citation purposes than to fully understand them and build upon their ideas.\n\n A) Presentation\n \n A1) While it is clear from reading the text that your data are in English, this should be explicitly mentioned in both the abstract and the introduction.\n \n A2) Typos\n \n A2.1) Since you used the abbreviation LLM for Large Language Models (see line 034), you should consistently use this abbreviation throughout the text. Therefore, please revise the text in lines 081, 092, 113, 159, 427, 488, and others.\n \n A2.2) The same applies to the abbreviation Supervised Fine-tuning (see line 112). Please correct the term on line 125.\n \n A2.3) In several instances, you did not correctly apply spaces, particularly near references. Please review and correct this in lines 083, 088, 089, 098, 099, 101, 102, 204, 209, 248, and so on.\n\n A3) As mentioned in the author guidelines (see https://iclr.cc/Conferences/2025/AuthorGuide), you are encouraged to include a Reproducibility Statement at the end of the main text. 
This statement should detail the efforts made to ensure reproducibility and include any necessary information for those wishing to replicate your results.\n \n B) Experimental\n \n B1) In the introduction (lines 042-046), you mention that A-clone is fine-tuned exclusively with private personal data from a typical individual, but you do not mention whether you obtained permission to use that data. Additionally, I did not find any statement from this individual indicating their approval for the use of their data.\n \n B2) Admittedly, your experiment is quite interesting, as I have not come across any similar studies in the literature that involve email data collected over a 20-year period from a single user. However, I believe the experiment is limited by the small number of users you selected. While it may be relatively straightforward to clone the characteristics of a single user from an LLM, if your experiment had involved two or more users, it would be harder to guarantee that the approach is effective. In practice, such a tool should be tested on large datasets from various users. This is where an LLM’s capability to distinguish individual characteristics would be most valuable, and generally distinguish if this model overfits the data or really understands the character of a person. \n\n\n B3) Regarding the anonymization of the data, you mention in lines 104-107 that in most cases, the data are anonymized and difficult to trace, which is why you chose not to apply any anonymization technique. However, in my view, anonymization is essential to protect the privacy of the data, especially in your case, where the data span a 20-year period. Failing to implement anonymization could encourage both the academic community and the industry to overlook this crucial aspect of user privacy.\n \n B4) Another point I raised concerns the questions you asked the LLM and compared to the ground truth of person A. 
It’s important to examine the distribution of topics in these questions. For example, in the paper \"Does GPT-4 pass the Turing test?\" by Cameron R. Jones and Benjamin K. Bergen (2024), which you reference, the authors use a variety of question types, ranging from personal information to general knowledge. This approach helps determine whether the LLM truly understands the personality and behavior of the person or simply overfits to their data. In the same paper, authors gave an option to tester volunteers to explain why they believed a certain response was generated by an AI, which could provide you with further insights into the quality of your model. You mention in lines 137-148 that you use a variety of questions to cover a wide range of topics. Please add a plot that shows the distribution of questions topics to wrong/correct response prediction of tester volunteers. \n\n B5) Related Works Section\n \n B5.1) In general, it seems that many papers can be added since personalized LLMs are a broad topic. For instance, regarding the Turing test, if you refer to the paper 'Cameron R. Jones and Benjamin K. Bergen. Does gpt-4 pass the turing test?, 2024.' you mentioned, there is much richer literature that explains what it entails. I believe it would be beneficial to include a brief paragraph explaining the Turing test and suggesting additional papers for readers who want more detailed information.\n \n B5.2) It would be helpful to include a paragraph in the related works section discussing the application of personalized LLMs. Additionally, you could mention potential applications of your model in the introduction to provide more context for its relevance.\n \n B5.3) To the best of my knowledge, it seems you have omitted some recent papers related to personalized LLMs that would be valuable to include in the related works section. 
Examples include \"Leveraging LLM Reasoning Enhances Personalized Recommender Systems\" (Tsai et al.), \"How Good are LLMs in Generating Personalized Advertisements?\" (Meguellati et al.), and \"Doing Personal LAPS: LLM-Augmented Dialogue Construction for Personalized Multi-Session Conversational Search\" (Joko et al.).\n \n B5.4) In the introduction, it would be useful to mention the most relevant works related to your paper and clearly state what distinguishes your work from others in the field."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. What is the source of the 701 Turing test questions? Could you provide background information on how these questions were collected?\n2. The distribution of the 28 participants appears unbalanced, particularly concerning the 'Relationship Category.' The proportion of 'family' participants is too low. Additionally, having only 'stranger' and 'academic' categories, aside from 'family,' seems unreasonable.\n3. Could Figure 2’s confusion matrix be presented such that the sum of all cells equals 1?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The topic of this paper is fascinating and important, focusing on whether an LLM can learn a person’s tone, memory, personality, values, and perspective.\n2. The experimental design, which involved 31 participants with varying levels of familiarity with A to distinguish between the responses of A and A-clone, is also good."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates whether an LLM can replicate an individual's language style by learning from that person's language corpus, which is an interesting topic. The authors trained an LLM called \"A-clone\" using a corpus collected from an individual referred to as A. They designed a series of experiments, including Turing-like tests and psychological assessments, to gather responses from both A and A-clone. Thirty-one participants, each with varying degrees of familiarity with A, were asked to differentiate between the responses of A and A-clone. The study concluded that A-clone's responses showed a strong correlation with those of A."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Although the current experimental setup produces interesting results, it is limited in fully supporting the paper's claims. Since the model is trained only on corpus A, it's unclear if the findings apply beyond this single individual. Expanding the experiments to include language data from other individuals and training corresponding LLM clones would strengthen the conclusions. Without this, as the authors note, the results remain preliminary. A broader evaluation would provide stronger evidence for the LLM’s ability to mimic individual language styles more generally."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "It is unclear whether people sending mails to $\\mathcal{A}$ are aware of their data being used for finetuning a language model (which, depending on jurisdiction, can be an issue). Additionally, potential ethical implications of the work are not well-discussed (there is no ethics section) and hence the submission would benefit from an additional ethics review."
},
"flag_for_ethics_review": {
"value": [
"Yes, Privacy, security and safety",
"Yes, Responsible research practice (e.g., human subjects, data release)"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Could the authors provide more details on the training data?\n- Could the authors further describe the results on the personality tests?\n- Could the authors give more details about question distributions in the Turing tests?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The problem is interesting and relevant. The idea of mimicking an individual via an LLM that is finetuned on personal data is also a realistic threat. Already, LLM-based applications are heavily used by nefarious actors to trick people (scam mails, telecalls, etc.)\n- The setup of the human study seems sensible and contains a wide variety of sanity checks (whether individuals are aware, questions to test for basic LLM answers such as the LLM giving code)\n- Baselines are overall sensible. However, it is unclear to the reviewer how to interpret the \"followed by the relevant book chapter\" in the GPT-4o baseline. The description before sounded like this is static. However, this sounds like it could be query-specific. If it is not dynamic, it might be interesting to have this as a new baseline that tries only to pull out the most relevant information for the Q/A set or book to answer the respective question (as done in RAGs).\n- Results show that humans have difficulty detecting the real human across both tests (and include relevant standard deviations and overall certainties). The split of the two different types of tests is interesting, and the split by familiarity was generally a useful ablation (see points given below)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper studies the current capabilities of a LLM to mimic a human individual after it has been finetuned on a set of personal data. In particular, Llama3-70B, finetuned on 38000 Q&A pairs extracted from e-mails, is evaluated on its answers to ~700 questions as well as a wide range of personality tests. The study finds that humans (n=31) with varying degrees of knowledge about the mimicked individual in a (static) Turing test setup have difficulty distinguishing between the LLM and the answers given by the mimicked individual."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The actual data used is somewhat unclear. I understand from a data privacy perspective why information about the used data is limited but 38000 Q&A pairs from e-mails is an insufficient description to understand the data distribution. It would be nice to have at least a length / rough topic distribution to make more sense of the data. Additionally this could help relating the content that the LLM was trained on to the questions asked in the tests.\n- I would not consider 20 years of e-mail data from one individual as a \"small\" dataset (for this particular task). It would make sense to here ablate over how much data is needed for the LLM to \"mimic\" well enough as this significantly impacts the applicability of such a threat.\n- In the reviewer's opinion, this submission would strongly benefit from a dedicated ethics section. In particular, it is unclear whether people sending mails to $\\mathcal{A}$ are aware of their data being used for finetuning a language model (which, depending on jurisdiction, can be an issue). Further, a more detailed discussion of the points at the bottom of page 9 can be found here.\n- The personality test results could be explained/contextualized further. Notably, to me, it seems like that while the human tests (e.g., Fig. 5) strongly prefer $\\mathcal{A}-clone$, o1 with a basic ICL setup previously used for GPT-4o, significantly outperforms $\\mathcal{A}$ across almost all categories - this discrepancy in performance seems unexplained in the work so far. Could this be due to different question setups or human judgment not being directly aligned with questions in such personality tests?\n- As the paper acknowledges, the Turing test is static, and several of the questions are quite general in nature (e.g., \"Does AI represent a major threat to humanity?\"). 
One thing that would benefit the study here is to cluster questions into more fine-grained groups, e.g., (1) Does the LLM actually perform transfer learning by being able to trick people on questions where no samples have been in the training data /but rather it extrapolated personal traits of the individual) (2) Does it perform better on more subject-oriented questions or more personal questions, etc? This would probably also strengthen the familiarity with $\\mathcal{A}$ ablation, where one would expect to see stronger trends within certain subcategories.\n\n### Typos/Nits\n\n- Some inconsistencies in Llama3-70B spelling, e.g,. bottom of page 1 vs abstract\n- Citations seem to be quite often without space before, e.g., line 87/88\n- Missing \".\" at the end of page 2\n- There are some margin violations, e.g., page 9 and in appendix that should be very easily fixable."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- What criteria did you use to select the open-ended questions for the Turing-like tests?\n- How did you ensure there was no data leakage between the training set and the evaluation questions? Did you use any methods to check for similarity or overlap between the training and evaluation datasets?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper convincingly evaluates A-clone, demonstrating that it responds very similarly to the individual.\n- Real human evaluators used, with a variety of familiarities with A.\n- Good set of baselines: compares A-clone not only to other LLMs (GPT-4o, Llama3-Instruct) but also to responses from the individual's family members."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper explores creating and evaluating \"A-clone\", a personalized large language model designed to mimic a specific individual's responses. The authors fine-tune Llama3-70B using a private dataset of emails and interviews from the individual. They evaluate A-clone through Turing-like tests with human participants and comparisons with psychological test responses. Evaluators struggle to distinguish between A-clone and the actual individual, outperforming other baselines. Psychological tests show strong correlations between A-clone's responses and the individual's. The paper provides a proof-of-concept for a personalized LLM while highlighting the importance of addressing associated risks and ethical considerations."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- A significant existing body of works exists looking at tuning models to emulate particular characters or personas such as Li et al, 2023 (https://arxiv.org/abs/2308.09597) and Zhou et al, 2023 (https://arxiv.org/abs/2311.16832). \n- The fine-tuning techniques used are not interesting or novel. \n- The authors do not attempt fine-tuning other models besides Llama 3 70B.\n- The authors do not describe a process for generating or selecting the evaluations questions that ensures they are distinct from the email and interview data used for training."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024the,\ntitle={The other you in black mirror: first steps from chatbots to personalized {LLM} clones},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=znGnmAM44K},\nnote={under review}\n}"
},
"abstract": {
"value": "Large language models (LLMs) have demonstrated remarkable abilities in a wide\nvariety of generic tasks. Here we investigate whether it is possible to use LLMs\nto partially replicate cognitive aspects of an individual by fine-tuning an LLM\nwith personal data. Our model, A-clone, built on the pretrained Llama3-70B, was\nfine-tuned with a private dataset from one volunteer referred to as A throughout. We\nevaluated A-clone in two ways. First, using 701 open-ended questions, we gathered\nresponses from A, A-clone, other LLMs, and A’s family members imitating A.\nWe conducted a Turing-like test where 31 participants with varying degrees of\nfamiliarity with A attempted to identify A’s real answers in a question-and-answer\ntask. Human participants identified the genuine responses from A 55% ± 7%\nof the time, just over chance levels. A-clone outperformed all other baselines\nin mimicking adequate responses from A. Second, we compared the outputs\nof A-Clone with the ground truth from A in 10 psychological, moral, career,\npolitical tendency, and general knowledge tests, containing 484 questions altogether.\nA-Clone demonstrated a strong correlation with A’s responses. This work provides\nan initial, proof-of-principle, evaluation of the possibility of mimicking the\nresponses of an individual, opening doors to many real-world applications but\nalso raising potential privacy and safety concerns about digital clones. The code\nand data can be found in this link."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Large Language Models (LLMs)",
"Personalized AI",
"Turing Test",
"AI Safety"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/4cbfedd25d542f374d2565184d015f3e79bfe324.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/b1d0ccf992ea78918bb8d3a55488d4d885064812.zip"
},
"title": {
"value": "The other you in black mirror: first steps from chatbots to personalized LLM clones"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
znL549Ymoi | Interpretability of LLM Deception: Universal Motif | main | Active | safety;honesty;deception;lie;interpretability;Large Language Model | alignment, fairness, safety, privacy, and societal considerations | 3;3;6;10 | 5;4;3;5 | 1;3;3;3 | 1;3;3;4 | 1;1;2;4 | 5.5 | 4.25 | 2.5 | 2.75 | 2 | 0.157459 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Can you share more details on the methodology used for patching? \n\nDoes this effect extend to further models of sizes between those tested, or larger than those tested? \n\nHow might these results extend to non-binary lying (ie. where partial truths are told, and the response isn't clearly labelled with true or false)?"
},
"rating": {
"value": 10
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Originality: The model finds a novel rotation effect in latent space of models when prompted on a honesty and lying dataset. Though related work has engaged with Deception in various ways, this type of interpretability using activation steering has seen little precedent in deception research. \n\nQuality: While the paper has a narrow focus, the claims made are generally well substantiated. Perhaps the term 'universal' is a bit confident given this paper tested the effect across four models of different sizes, and it's possible that further models might not display the same effect. \n\nClarity: The figures are exceptionally clear at illustrating the rotation effect and the distinctions between the phases. The math explaining the computation of the activation steering vector is easy to follow. The section explaining how the patching was done could have been a bit more clear on how the patching operation was done. \n\nSignificance: This work presents an interesting starting point for lots of future work investigating the nature of deception in LLMs. If this is indeed a universal motif across all LLMs, this work has found a highly unique pattern which may allow for safety intervention based on activations for models in inference. Due to activation steering's computational lightness, this might be a very tractable intervention for models in deployment. Additionally, follow on work might shed more clarity on which stages and layers are most strongly involved in deception, and how to mitigate risks from Deception more effectively."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates Deception in LLMs and uses activation engineering to steer the model towards truthfulness in different layers. The experiments are specifically focussed on truthfulness and lying behavior with a contrastive steering vector computed. Each factual question in the dataset is prompted to give a true or false answer. The authors find that only larger models can knowingly lie, whereas smaller ones cannot. Further, the authors find that the latent representations of lying go through three distinct phases including separation of honest and lying (where two distinct clusters form), separation of true and false (where with the previous two clusters, two subclusters respectively form) and rotation of truth direction (where the lying cluster reverses direction by inverting the positions of the two subclusters, whereas the truthful cluster further separates out the original subclusters in the original direction). Beyond this, the authors find that intervention with a steering vector to reduce lying is only effective in the third stage layers. It appears that in models that cannot lie the three stages also exist but the rotation does not happen, whereas in models that can lie, the rotation is consistent. Interestingly, this motif is consistent across models of size 1.5B to 70B. This research sets an interesting precent for further research into deception, with the potential to extend this work into further types of deception and to more deeply understand the roots of this phenomenon. Perhaps it suggests opportunities for the development of interventions on the activation level during deployment."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The authors also acknowledge that only a narrow subset of deception, namely binary honesty and lying, is considered. It remains open whether these results would generalize to other forms of deception or dishonesty, such as when the statements are not purely factual or when there might be partial truths or a continuum of honest to dishonesty rather than a binary truth or false response. Would this, for instance, correspond to a partial rotation? Perhaps the authors intentionally focussed on purely factual true or false questions for ease of evaluation. \n\nIt might be further interesting to see whether this effect replicated in more complex deceptive setups. For instance, if an agent needs to be lie to achieve a goal in a multi-turn problem setting, does the same effect appear? It might be interesting to try to analyse whether rotation happens, and if so, at which time point the rotation takes places, and how soon after this the outputs reflect the deception. \n\nIt would be interesting to see more analysis of the differences in the rotation effect between different models. Does it rotate at approximately the same location (proportionately) in layer space in each model? Are the sizes of the stages the same size for each model respectively? (ie. for Llama - 3- 8B there seem to be 4 layers in each stage. Does one of the larger models show this same consistency in size of layers?). Are the layers involved always consecutive? What proportion of a model's layers is involved vs not involved in this effect? \n\nTypo in section 4.2 in \"Lllam-3-8b-chat\"."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Sec 3.2: \n- is the \"protocol\" only composed of those two fixed prompt templates, or are other templates used (for instance, for different models)?\n\nSec 3.3:\n- Is the \"match\" with the ground-truth label done as a simple substring check? If that is the case, \"The answer is not *true\" would be incorrectly marked.\n\nSec 3.4: \n- Truth direction: what do \"true\" and \"false\" refer to? Is it the answer that the LLM produces, or the ground truth?\n- That is an arithmetic mean, not a geometric one.\n\nSec 3.5.1: \n- It is not clear from the formulas there that the prompts are contrastive; in particular, the definitions in Eq 4 do not make clear that the same number of prompts are used.\n\nFig 1:\n- The lie example in Fig 1 is not very \"convincing\", but it is rather as if the model was joking. Are all generated lies across models of this form? If that is the case, then maybe the authors could try methods that generate more convincing lies. If that is not the case, then it would be interesting to investigate if there is any distinction in the activations between convincing and non-convincing lies.\n\nSec 4.3:\n- have the authors got any interpretation for why the \"truth direction\" influences the ability to lie?\n\nSec 4.4: \n- is the patching experiment done on all models? Or only on those that are capable or incapable of lying?\n- if the activations of a complete layer are replaced, does that mean that all downstream activations are equivalent to those that the model would produce if the truthful example was presented to the model?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "## originality\n- the findings on the three stages of activations across layers are original.\n\n## quality\n- The experiments are thoroughly conducted and analyzed.\n\n## clarity\n- The various figures present the main findings very clearly and succinctly.\n\n## significance\n- Understanding how lying and deception emerge in LLMs is an important question; moreover, the paper also suggests that activation steering could neutralise (or at least reduce the prevalence) of deception."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper studies the internal representation produced by LLMs prompted to lie or respond truthfully to binary questions. In particular, they analyse the activations at all layers of the model and plot the first 2 principal components (PCs) of the activations. The resulting scatterplot can be grouped in three stages: first, the PCs of the activations corresponding to truthful and untruthful scenarios separate; then, within each of the clusters obtained in the first stage, the activations corresponding to true and false answers separate, with the vector going from the centroid of the true to the false cluster (\"truth direction\") being roughly parallel across truthful/untruthful scenarios; finally, the activations for the true and false answers in the untruthful scenarios get swapped, thus leading to antiparallel truth directions. These three stages occur in all models capable of lying in the ones they tested. Next, they perform some experiments to probe the causal nature of the rotation: first, they perform \"patching\", where the activations corresponding to the lying scenarios are patched onto the truthful scenario to see if the model ends up lying or not; they find that a small set of token positions and heads lead to changing model behaviour. Finally, they perform model steering, namely, adding a constant steering vector obtained as the average difference of the activations in the truthful and lying scenarios; they find that only steering the layers from the third stage reduces lying. Therefore, they conclude that these layers are causally responsible for the lying behaviour."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The related works are missing reference to https://arxiv.org/abs/2407.12831, which shares several similarities with the paper (in particular, they also identify a 2-dimensional space which represents lying/truthful state and true/false answer.\n\n- Some parts of the paper are written in a too informal manner for a published work, and seem rather to have been obtained from internal research notes. In particular, I refer to Sections 3.4, 3.5, 3.6, 3.7, 4.2, 4.4.\n\n- Other minor presentation details: Section 3 could introduce what the various subsections should discuss; Fig6B has a \"Loading Mathjax\" box which should be removed.\n\n- The paper claims that it \"introduces a simple yet general protocol to induce large conversational models to knowingly lie\"; however, from the description in 3.2, this protocol seems extremely simplistic (two prompt templates fixed across LLMs), not advancing in any way with respect to previous settings, such as Pacchiardi et al. 2023. As such, I don't think this should be presented as a main contribution, nor as \"careful prompting design\", as it is described in Sec 3.3. Or, does the description of Sec 3.2 overlook important details (see question below)?\n\n- The paper claims that the patterns they find are \"universal\". While interesting that they are coherent across the considered LLMs, the setup they consider is still quite narrow (a single prompting setting, and only ~100 binary questions linked to scientific facts). As such, I believe the use of the term \"universal\" is unsuitable, particularly in Secs 2 and 5. \n\n- Relatedly, the fact that a fairly narrow set of examples is considered makes me wonder how general the patterns of activations (shown in Fig 3) are; in particular, it may as well be that the activations are different for a different set of samples. 
Or, it may be that the overall patterns are preserved, but that the different clusters are less distinct due to larger variety in the prompts.\n\n\n- More experiments could be done to complement the ones provided and provide additional evidence of the causal nature of the 3-stage patterns identified. For instance: \n\t- To complement the study in Sec 4.5, the authors could try steering honest models to lie.\n\t- Also, steering using the truth direction rather than the honest direction could be done"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. Can you explain why the four subfigures in Figure 4 part A are totally identical?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "Study an important problem of safety.\nDo some good visualization."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "First conduct a simple prompt engineering to induce LLMs from four families (Qwen, Yi, Llama and Gemma) with different sizes to knowingly lie.\nThen use the simple key word matching (\"true\"/\"false\") proportion as the evaluate metric for LLM deception.\nUsing the well-kown interpretability tool, i.e., activation steering, to study the key behind deception behavior and try to reduce deception.\nName 3 stages and some directions, give the finding that if a LLM complete the third stage by rotating the truth direction, it can perform deception knowingly."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper writing is not clear. Introduction part is too short to introduce the motivation and proposed method. Arrange of this paper is also bad. Figure 1 is far away from its corresponding explanations. There are also some typos resulting in difficulty in reading.\n2. The authors do simple prompt engineering to induce lies from various models. They reach the suspectable conclusion: small models ''cannot'' lie. They cannot conclude like this unless exhausitive induction. I highly suspect this conclusion is incorrect if you try another prompt engineering approach.\n3. LLMs' responses can vary a lot with multiple runs even with the same prompt. Results in Figure 2 with the simple metric ''accuracy'' are not reliable in this circumstance. And the authors say nothing about specific temperatures for these LLMs, which further makes the results untrustworthy.\n4. With little innotation, this paper just uses a famous interpretability tool to study the deception scenario, presenting some findings seems not robust or reliable."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "N/A"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The study examines a diverse range of language models across various sizes and model families.\n2. The research findings are intriguing, particularly the observation of truth direction rotation in the third refinement stage for models capable of lying.\n3. The verification of findings, especially the conclusion that the third stage is causally linked to lying, appears robust and well-supported."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates several large language models (LLMs) to understand when and why these models may exhibit deceptive behavior. The authors first find that the tendency to lie increases with model size. They then explore how latent representations associated with lying evolve through three iterative refinement stages, concluding that smaller models lack the ability to lie as they cannot rotate truth directions during the third stage. The study further examines if this third stage is causally linked to lying, with findings that suggest a causal relationship."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The protocol used to instruct models to lie intentionally is overly simplistic. As a result, the conclusion that lying scales with model size may be somewhat limited. It would be valuable to explore whether smaller models could also exhibit deceptive behaviors if prompted with more sophisticated, carefully engineered instructions.\n\n2. While it is interesting to identifying that the third refinement stage in the rotation of truth direction is causally linked to a model's ability to deceive, it would add depth to the study to investigate why smaller models fail to achieve this directional rotation.\n\n3. The paper's readability, particularly in Section 4.4 and Figure 5, could be significantly improved. It is challenging to understand how the results presented in Figure 5, along with the text in the second paragraph of Section 4.4, effectively demonstrate the causal relationship between the third refinement stage and lying behavior."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We use interpretability\\transparency tools to understand and control deception in a wide range of large conversational models."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024interpretability,\ntitle={Interpretability of {LLM} Deception: Universal Motif},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=znL549Ymoi},\nnote={under review}\n}"
},
"abstract": {
"value": "Conversational large language models (LLMs) are trained to be helpful, honest and harmless (HHH) and yet they remain susceptible to hallucinations, misinformation and are capable of deception. A promising avenue for safeguarding against these behaviors is to gain a deeper understanding of their inner workings. Here we ask: what could interpretability tell us about deception and can it help to control it? First, we introduce a simple and yet general protocol to induce 20 large conversational models from different model families (Llama, Gemma, Yi and Qwen) of various sizes (from 1.5B to 70B) to knowingly lie. Second, we characterize three iterative refinement stages of deception from the latent space representation. Third, we demonstrate that these stages are \\textit{universal} across models from different families and sizes. We find that the third stage progression reliably predicts whether a certain model is capable of deception. Furthermore, our patching results reveal that a surprisingly sparse set of layers and attention heads are causally responsible for lying. Importantly, consistent across all models tested, this sparse set of layers and attention heads are part of the third iterative refinement process. When contrastive activation steering is applied to control model output, only steering these layers from the third stage could effectively reduce lying. Overall, these findings identify a universal motif across deceptive models and provide actionable insights for developing general and robust safeguards against deceptive AI. The code, dataset, visualizations, and an interactive demo notebook are available at \\url{https://github.com/safellm-2024/llm_deception}."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"safety",
"honesty",
"deception",
"lie",
"interpretability",
"Large Language Model"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/87404ee6fa62d182d4038df7ebec28c88ed083b5.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Interpretability of LLM Deception: Universal Motif"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
znhZbonEoe | Understanding the Stability-based Generalization of Personalized Federated Learning | main | Active | stability analysis+generalization gap+excess risk+personalized federated learning | unsupervised, self-supervised, semi-supervised, and supervised representation learning | 3;5;6;8 | 4;4;4;4 | 2;2;3;4 | 2;2;3;3 | 1;2;3;3 | 5.5 | 4 | 2.75 | 2.5 | 2.25 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Why does not the bound Theorem 1 match the centralized SGD if $K=1$ without personalization? Is it due to technical analysis? Could the authors solve such technical issue?\n\n2. What are the main technical difficulties in the analysis, comparing to existing literature?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper provides generalization bounds leveraging uniform stability of the algorithm for PFL under both centralized and decentralized settings. The bounds are topology-related, which provides new insight in how graph structures affect the generalization."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper studies the generalization gaps of personalized federated learning (PFL) under centralized and decentralized cases. Uniform stability tools are used to derive generalization upper bounds that reflect the influences of graph topologies."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The main weakness of the paper is that the bounds do not recover the centralized training with SGD. To be specific, PFL reduces to centralized SGD when $K=1$ and $v_i$ is constant $\\forall i \\in [m]$. However, the proposed bounds in Theorem 1 indicate a worse performance than SGD (Hardt et al., 2016). \n\n2. The analysis of the paper is standard as literature, which means there is limited technical contribution of this paper.\n\n3. Shown by (Sun, Niu, Wei, 2024), the generalization of FL is affected by different data heterogeneity levels, while this paper does not capture such phenomenon.\n\nI am willing to raise the score if my concerns are addressed.\n\nReference:\n\nSun, Z., Niu, X., & Wei, E. (2024, April). Understanding generalization of federated learning via stability: Heterogeneity matters. In International Conference on Artificial Intelligence and Statistics (pp. 676-684). PMLR."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Please address the questions in Weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "This paper considers an important problem of trying to quantify the generalization properties of personalized FL. An important by-product of such an analysis is quantifying the optimal number of communication rounds that minimizes the overall error (= optimization error + generalization error); papers on convergence guarantees do not consider the generalization error."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper derives uniform-stability-based generalization bounds (as defined in [1]) for personalized federated learning (FL). Bounds are derived in both the centralized setting (where there is a central server) and the decentralized setting. By combining the derived bounds with existing bounds on the optimization error, the paper provides an expression for the number of communication rounds which they claim is the optimal number of rounds to minimize the total (= optimization + generalization) error. Some experiments are performed to validate some of the theoretical insights.\n\n[1]: Moritz Hardt, Ben Recht, and Yoram Singer. Train faster, generalize better: Stability of stochastic gradient descent. In *International conference on machine learning*, pp. 1225–1234. PMLR, 2016."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**Technical issues:**\n\n**1.** In Remark 4/Corollary 1 and Remark 8/Corollary 2, where the authors are deriving the \"optimal\" number of communication rounds $T^{*}$, how do the authors know that the derived generalization bounds are order-optimal w.r.t. $T$, $K$ and other important quantities. If the tightness of the derived generalization bounds is not shown, then it is unfair to call it \"optimal\" number of communication rounds. Moreover, in Corollary 1 and 2, the dependence on other problem-specific parameters (such as smoothness constants, stochastic gradient variance, etc.) shouldn't be ignored if they are being used to derive the \"optimal\" number of communication rounds.\n\n**2.** Can the authors please summarize the technical challenges compared to the seminal analysis in Hardt et al., (2016) for SGD? Is there any particular challenge due to local updates in the clients? \n\n**3.** Overall I'm not too impressed/surprised by the derived results -- a more satisfactory result for me would have been something that shows the generalization benefits of personalization compared to *no* personalization. \n\n**Presentation issues:** \n\n**1.** The presentation of Theorem 1 and Theorem 2 is rather poor and complicated. What is the meaning of \"*They decay per iteration $\\tau = t K + k$,...*\"? And the equations below have $\\tau_0$ instead of $\\tau$. I'd have directly presented eq. (8) and (10) with the optimal value of $\\tau_0$ rather than presenting the intermediate results (eq. (7) and (9)). And what is $\\kappa_\\lambda$ in the context of D-PFL? Also, it is very hard to parse Table 1.\n\n**2.** Definition 2 is not clear to me -- what is the index $j$ (subscript of $z$)? In eq. (4), a bracket is missing at the end in $F(\\mathcal{A}(S)$ and $f(\\mathcal{A}(S)$. In Assumption 2, what is $F_i$? Only $f_i$ was introduced earlier. \n\n**3.** Paper writing needs to be improved. 
For instance, I'd suggest using \"algorithm-dependent\" instead of \"algorithm-matter\". Also, it should be \"influential\" in Remarks 4, 5 and 8."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. In line 803, could you clarify how the inequality is derived? Specifically, how is $\\Vert f(w;z) - f(\\tilde{w};z)\\Vert^2$ bounded given that $U=sup_{w, z}f(w;z)$.\n2. In line 806, it appears there should be an inequality rather than an equality, since $\\Vert a + b \\Vert \\leq \\Vert a \\Vert + \\Vert b\\Vert$.\n3. The notation $I$ requires further explanation, as it is unclear how to derive the bound for $P(\\{\\xi^c\\})$;\n4. Since the authors discuss generalization stability, I recommend comparing the results with similar studies focused on generalization stability rather than generalization bounds as shown in Table 1."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper presents a well-organized structure and provides a clear comparison between its results and prior findings. Additionally, it offers an in-depth discussion on the impact of hyperparameters on generalization performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper establishes a generalization stability analysis for a widely-used personalized algorithm, partial federated learning (PFL), in both centralized and decentralized federated learning (FL) settings. The authors also discuss the relationship between the derived stability results and previously established generalization stability upper bounds, and provide insights into how these findings can inform and guide the training process."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Inconsistent Notation: The notation used throughout the paper lacks alignment, which hinders readability. For instance, in Algorithm 2, the aggregation step update is defined as $u^{t+1}=\\frac{1}{n}\\sum u_i^{t+1}$, but in line 866, the update formula appears as ${\\frac{1}{n}}\\sum_{i\\in \\mathcal{N}}u_{i, K_u}^t$. It is unclear what $\\mathcal{N}$ represents in this context. Additionally, in line 812, the notation $I$ is introduced without definition. This inconsistency in notation complicates understanding and interpretation.\n2. Unclear Experimental Validation of the Proposed Theorem: It is difficult to discern how the experimental results support the theoretical findings. The authors establish generalization stability, yet in the experiments, they only measure the gap between training accuracy and test accuracy. There is no clear implementation of perturbations as discussed in the theoretical framework, making it unclear how the experiments substantiate the proposed stability theorem."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "see the weakness part."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "The major contribution of this paper is from the theoretical aspect in generalization.\n\n1. Generalization in (personalized) federated learning is critically important yet understudied, largely due to its complexity compared to the extensive body of work on convergence analysis in FL.\n\n2. This paper offers a rigorous analysis, with well-rounded discussions and comparisons that cover various specific cases.\n\n2. By focusing on the “algorithm-matter” generalization through uniform stability, the authors provide a practical framework that incorporates the effects of stepsize, learning steps, and communication structure (C-PFL vs. D-PFL). This analysis fills a significant gap in the field by moving beyond static theoretical assumptions.\n\n3. The detailed examination of hyperparameters like learning steps and stepsizes, along with communication modes, is valuable for both theoretical insights and practical implications. The finding that C-PFL generalizes better than D-PFL is intriguing and has practical relevance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper addresses an important theoretical gap in Personalized Federated Learning (PFL) by focusing on generalization performance beyond convex conditions, specifically analyzing non-convex settings. The authors introduce a generalization analysis that incorporates algorithm-dependent factors through uniform stability. The paper investigates the effect of hyperparameters, learning rates, and communication modes (Centralized vs. Decentralized PFL) on generalization performance. Additionally, they provide excess risk bounds and propose early stopping criteria for optimal risk in PFL, supported by experiments on the CIFAR dataset."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I don’t see any major weaknesses in the paper. However, I suggest: 1) incorporating additional datasets and models in the experiments to strengthen the findings; and 2) clarifying the notation throughout, e.g., 'U' in the theorem."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024understanding,\ntitle={Understanding the Stability-based Generalization of Personalized Federated Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=znhZbonEoe},\nnote={under review}\n}"
},
"abstract": {
"value": "Despite great achievements in algorithm design for Personalized Federated Learning (PFL), research on the theoretical analysis of generalization is still in its early stages. Some recent theoretical results have investigated the generalization performance of personalized models under the problem setting and hypothesis in the convex condition, which do not consider the real iteration performance during the non-convex training. To further understand the testing performance from the theoretical perspective, we propose the first algorithm-matter generalization analysis with uniform stability for the typical PFL method Partial Model Personalization on smooth and non-convex objectives. In an attempt to distinguish the shared and personalized errors, we decouple the shared aggregation and the local fine-tuning progress and illustrate the interaction mechanism between the shared and personalized variables. The algorithm-matter generalization bounds analyze the impact of the trivial hyperparameters like learning steps and stepsizes as well as the communication modes in both Centralized and Decentralized PFL (C-PFL and D-PFL), which also concludes that C-PFL generalizes better than D-PFL. Combined with the convergence errors, we then obtain the excess risk analysis and establish the better early stopping point for the optimal population risk of PFL. Promising experiments on CIFAR dataset also corroborate our theoretical results."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"stability analysis+generalization gap+excess risk+personalized federated learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/e6d5a0f43fb4a791c1ed3929c711e5700f5225f1.pdf"
},
"presentation": null,
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Understanding the Stability-based Generalization of Personalized Federated Learning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zno7tZVG8T | Extreme composite compression of large language models through joint optimization | main | Active | model quantization;model compression;sparsification;joint optimization | generative models | 3;3;5;6 | 4;4;4;3 | 3;2;3;3 | 2;1;2;3 | 3;2;3;2 | 4.25 | 3.75 | 2.75 | 2 | 2.5 | -0.777778 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to the Weaknesses section."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper appropriately points out the sequential application of quantization and sparsification results in sub-optimal results.\n\n2. Experiments show that joint optimization can better recover the model accuracy compared to other approaches."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "There are two well-known approaches towards post-training LLM compression: quantization and sparsification. One can apply both techniques at the same time to compress the LLM. Traditionally, these techniques are applied sequentially, which can lead to significant accuracy losses due to compounding errors. The authors assert that simultaneous optimization of both quantization and sparsification errors can enhance performance by mitigating alignment issues that arise in sequential processes. They propose a learnable transformation matrix and a reordering method within the sparsification process to improve weight selection stability. The proposed method is tested across various LLM backbones and compression configurations, demonstrating superior benchmark accuracy and efficiency compared to sequential methods, especially under high-compression scenarios with low-bit quantization and high sparsity rates."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The method only assessed in terms of easy benchmarks such as common sense reasoning. Comparisons on important benchmarks such as MMLU, GPQA, GSM8K, etc. would benefit the paper.\n\n2. For unstructured sparsity, the weight masks must also be stored and utilized at inference time. This results in additional storage and latency overhead compared to dense quantization. In this sense, a trade-off analysis regarding memory usage and accuracy is needed."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- What are the exact operations and steps for using the importance weights computed in Equation 9 to reorder quantized weights?\n- In line 316, why is the sparsity ratio $\\beta$ written as $5e^0$ instead of simply $5$? Is this a typo?\n- Could using more advanced quantization and sparsification methods yield better results?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- **Integrated Strategy**: The proposed training strategy effectively links quantization and sparsification, aiming to improve on previous sequential approaches.\n- **Broad Compatibility**: The approach is compatible with various quantization and sparsification techniques, making it adaptable across different LLM architectures."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a joint optimization strategy for compressing large language models (LLMs) through post-training quantization and sparsification, integrating both processes to minimize errors simultaneously. Experiments demonstrate performance improvements on LLaMA and OPT models over sequential approach."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- **Limited Novelty**: The compression strategy primarily combines existing techniques (i.e., AffineQuant and DSnoT) to compress LLMs without introducing something novel and substantial.\n- **Unclear Motivation for Reordering**: The purpose and intuition behind the reordering mechanism are not clearly explained, especially regarding the design of the importance metric in Equation 9. Details on how reordering is applied during training are also missing.\n- **Marginal Improvement over Sequential Approaches**: The performance gains over sequential methods are limited, and the ablation study raises questions about the joint optimization’s effectiveness.\n\t- Results in Tables 3 and 4 indicate that the reordering mechanism is critical for the success of joint optimization, which is not sufficiently emphasized. For instance, in the LLaMA2-13B model, \"Joint without reorder\" (PPL 411.18) performs significantly worse than the baseline “Sequential-Wanda” (PPL 13.56), suggesting that joint optimization alone may not avoid local optima as claimed. This inconsistency challenges the rationale of the method.\n- **Lack of Clarity and Visual Presentation**: The paper’s layout is inconsistent, with texts in Figure 3 and Table 6 much larger than in other tables and figures. Additionally, there is no visual demonstration of the proposed method to aid understanding.\n- **Limited Task Diversity**: Evaluation is limited to standard zero-shot NLP tasks. Testing on a wider range of tasks and datasets would provide a stronger case for general applicability."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- How do these methods perform with lower sparsity levels? Let's say for 5%, 10%, 15% and 20% sparsity. From a practicality point of view, this seems to be the more ideal setting, since the results in the paper for Quantization + Sparsity are very far from the bf16 numbers in most settings.\n- Have the authors tried integrating other post-training quantization methods such as GPTQ, or QuIP [1] and FrameQuant [2] for int2 quantization?\n- For the same number of parameters, is quantization + sparsity better than only quantization or only sparsity?\n\nChee, Jerry et al. “QuIP: 2-Bit Quantization of Large Language Models With Guarantees.” NeurIPS 2023.\n\nAdepu, Harshavardhan et al. “FrameQuant: Flexible Low-Bit Quantization for Transformers.” ICML 2024."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The proposed methods gives substantial gains over sequential optimization for int2 and int3 quantization with 50%-75% sparsity."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a method to jointly optimize for quantization and sparsity, unlike previous methods that alternate between optimizing the quantization loss and the sparsity loss. Joint optimization improves substantially over alternating optimization, across bit-widths and sparsity levels."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper does not provide results for latency. The authors must give latency numbers all the settings presented in the paper. This is to get a better understanding of latency vs quality tradeoff.\n- Although the results presented in the paper are impressive, the paper does not add significant contributions on top of existing works."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Relevant questions and suggestions are given below\n(1) What is the calculation of $||.||^2_{F}$ in equations 4,5, and 6? It is advisable to give clarification in the paper.\n(2) How the inference speed or throughput of the model is affected after quantization and sparsification on a specific GPU?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "This paper has the following strengths:\n(1) The paper presents a novel joint optimization strategy for quantization and sparsification of large language models. This approach is innovative as it addresses the issue of error amplification in traditional methods where quantization precedes sparsification.\n(2) The research demonstrates high quality through its comprehensive experimental design and rigorous evaluation.\n(3) This work uses a dynamic reordering method to enhance the effectiveness of learnable masks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a novel approach to the joint optimization of quantization and sparsity in large language models (LLMs).\nThe paper's main contribution lies in proposing a new joint optimization strategy that can simultaneously minimize errors from both quantization and sparsity, particularly suitable for compressing large language models in low-bit and high-sparsity configurations. Additionally, the paper introduces a dynamic reordering method to enhance the effectiveness of learnable masks, bringing significant performance improvements to the field of model compression."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "This paper has two shortcomings, as follows:\n(1)The paper could benefit from a stronger theoretical grounding of why joint optimization works better than sequential approaches. \n(2)The paper lacks a clear explanation of some of the conclusions and mathematical formulas. \nFor example, line 78 describes \"Our experiments indicate that initiating with quantization optimization followed by the\napplication of weight sparsity amplifies the quantization errors, consequently increasing the overall\nmean squared error loss\", but there is no corresponding experimental result to support this conclusion. “"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024extreme,\ntitle={Extreme composite compression of large language models through joint optimization},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zno7tZVG8T},\nnote={under review}\n}"
},
"abstract": {
"value": "Post-Training Quantization (PTQ) and Sparsification (PTS) are dominant methods in the compression of Large Language Models (LLMs) due to their minimal resource usage and generalizability. It is a natural idea to integrate quantization and sparsification in a unified framework, which however, often results in substantial accuracy losses. Here we argue that, the key lies in optimization. This paper introduces a novel joint optimization strategy that concurrently mitigates errors induced by both sparsification and quantization. \nUnlike sequential approaches, our method employs learnable transformation matrices to simultaneously optimize errors across both dimensions, preventing the typical misalignments associated with sequential optimizations. Furthermore, we present a reordering mechanism within the learnable mask sparsification process to maintain consistent sparsity ratios. This mechanism ensures the prioritization of the least important weights during each update iteration, thus enhancing the stability of the compression process. \nOur approach demonstrates considerable performance enhancements across diverse models and datasets, with the most notable gains observed under conditions of extremely low-bit quantization and high sparsity ratios. For example, in the LLaMA2-13b model with weight quantization at 2 bit and a 75% sparsity configuration, our method surpasses the state-of-the-art (SOTA) by 9.03% in average accuracy across five zero-shot tasks. Meanwhile, in the newest LLaMA3-8b model, with weight quantization at 3 bit and a 50% sparsity configuration, our method outperforms the SOTA by 4.58% (56.86% vs 52.28%) in zero-shot tasks and achieves a perplexity reduction of 4.45 on the WikiText2 dataset (10.78 vs 15.23)."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"model quantization",
"model compression",
"sparsification",
"joint optimization"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/20ad9fb7c09b87ffad321da89819a1d83e6142e5.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/17229cec036fbe3c23e7b265f2bf37fd963d104f.zip"
},
"title": {
"value": "Extreme composite compression of large language models through joint optimization"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zo049dh2r9 | 3DRealCar: An In-the-wild RGB-D Car Dataset with 360-degree Views | main | Active | 3D reconstruction;Car reconstruction;Car dataset | datasets and benchmarks | 5;5;6 | 4;5;4 | 3;2;4 | 2;1;3 | 3;3;3 | 5.333333 | 4.333333 | 3 | 2 | 3 | -0.5 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- [Q1] Is the proposed dataset restricted to academia or is commercial use\n allowed?\n- [Q2] I looked at the sample car reconstruction provided on the 3DRealCar\n dataset, and noticed that the COLMAP output mesh, =textured_output.obj=, is\n missing the upper half of the car. Is this expected?\n- [Q3] Are any of the vehicles in the dataset captured in more than one\n environment? It is valuable to have the same vehicle captured in multiple\n environments, with varying lighting conditions, because it creates ground\n truth for relighting. Relighting is challenging and it is otherwise very\n difficult to get non-synthetic ground truth, so having some of the cars\n captured in 2+ lighting conditions would be valuable. However, I cannot tell\n whether this is the case in the proposed dataset.\n- [Q3] Will the dataset also include pre-computed dense reconstructions like\n splats, or dense meshes extracted from a NeRF-like method? As mentioned above, it\n seems like the COLMAP mesh outputs can be incomplete.\n- [Q4] On L346, \"2D Car Parsing\", why are car parsing maps given as input to\n segmentation? Or does \"S\" refer to just a binary mask of the vehicle? (Vehicle\n vs. background?)"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "- [S1] The dataset includes diverse car types, including sedans, sports cars,\n and small trucks, captured in high resolution, and annotated with part\n information (e.g., doors, hood, wheels). This has applications in benchmarking\n 3D reconstruction and in developing simulators for autonomous driving.\n- [S2] The experimental section analyses multiple tasks such as neural rendering\n and 3D generative modeling, as well as camera-based perception in corner-case\n scenarios. In the latter example, the authors demonstrate that synthesizing\n corner-case images (e.g., cars swerving off road) using their proposed 3D car\n assets can help improve real data perception on these scenarios.\n- [S3] The dataset is already openly available online!"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a new dataset of 2.5k real cars scanned using modern\niPhones, resulting in high-resolution images as well as sparse LiDAR point\nclouds (~200 views per car). This setting provides additional training and\nbenchmarking opportunities in novel view synthesis (NVS).\n\nKey tasks which can be researched and benchmarked with this dataset include 3D\nreconstruction, relighting, and parsing of car parts.\n\nThe dataset is further motivated with experiments demonstrating that using its\n3D cars to perform image data augmentation can lead to perception improvements\nin unusual scenarios. The dataset is also shown to help with part segmentation,\n3D generative modeling, and 3D neural rendering.\n\nOne of my main questions regarding the paper, as elaborated in the Questions\nSection, is whether any car is captured in more than one lighting condition. I\nthink this is a major research gap, so the presence of this data is pivotal in\nassessing the impact of this dataset."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- [W1] The LiDAR used is the iPhone 14 model, which is sparser than typical\n automotive LiDARs. While denser LiDAR can of course be re-simulated from dense\n 3D models, this will require additional engineering effort while also\n suffering from some domain gap. This should be clarified in the intro: \"3D\n scanner\" makes me think of automotive or survey-grade LiDAR, not smartphone.\n- [W2] Some minor suggestions for additional references and discussions: Please\n consider mentioning DeepMANTA [0] as related work since, while old, it also\n proposed similar part-level annotations.\n- This isn't a weakness but as a tip: It can be helpful to provide some usage\n examples and tutorials for the dataset next to its SDK. For example \"how to\n run gaussian splatting on 3DRealCar\", since this can help more people learn to\n be familiar with the dataset, which can increase its impact.\n- Other small suggestions:\n - For the \"Stable Zero123\" reference please surround the author name in curly\n braces in the .bib file, so the citation shows up as \"Stability AI, 2023\"\n instead of just \"AI, 2023\".\n - For those curious (such as myself!) it would be helpful to add some\n low-level details on the data collection in the appendix. For example, I\n have no idea how to get raw LiDAR points from an iPhone, so a brief\n discussion would be interesting.\n - L325: \"the car is well-lighting\" -> \"the car is well-lit\"\n - L417: \"Dreamcract3D\" contains a typo\n- References:\n - [0]: Chabot, Florian, et al. \"Deep manta: A coarse-to-fine many-task network\n for joint 2d and 3d vehicle analysis from monocular image.\" Proceedings of\n the IEEE conference on computer vision and pattern recognition. 2017."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weakness."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The structure of the paper is well-organized and the content is clearly presented;\n2. The effectiveness of the dataset is demonstrated through its application to various downstream tasks, including various 2D and 3D tasks.\n3. This paper introduces a novel 360-degree real car dataset. Previous literature has not extensively covered. Both the quality and the diversity of the dataset are commendable."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a comprehensive 3D car dataset featuring high-quality 360-degree scans. The dataset comprises 2,500 cars, each scanned with 3D scanners to produce 200 high-resolution RGB-D views. Additionally, the dataset includes variations across three different lighting conditions: reflective, standard, and dark.\n\nThe authors detail the data collection methodology extensively. Data capture involved the use of RGB-D sensors, followed by the recovery of camera poses using Colmap Structure from Motion (SfM), and mask extraction utilizing Grounding-DINO. The 3D models were then reconstructed with 3DGS.\n\nThe paper discusses extensive 2D and 3D downstream experiments conducted using the dataset, which include 2D detection and segmentation, depth estimation, point cloud completion, and 3D car generation. The experiment results demostrate the effectiveness of the collected data on various tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The data collection and processing method is not novel and is a standard approach to reconstructing 3D assets.\n2. The paper does not highlight its uniqueness and irreplaceability, especially in terms of improving 2D parsing and 2D detection performance.\n3. The results of the NVS task (Fig.8 and 9) are considerably inferior to the state of the art.\n4. Many previous papers, including CADSim (https://arxiv.org/pdf/2311.01447), GeoSim (Chen et al., CVPR'21), and other related works, have demonstrated the capability to reconstruct cars from real-captured data, which I believe is a more extensible way to reconstruct car assets. The authors claim that their data is of higher quality, but the paper does not demonstrate the necessity of such high quality, especially for downstream tasks.\n\nAlthough the paper provides a very useful dataset and makes a significant contribution, the downstream tasks and data collection processes are not novel and do not meet the acceptance standards of ICLR."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Data of the same car under different lighting conditions could be added to enhance the analysis of lighting effects. Additionally, including metadata such as environmental maps or more detailed car materials could improve the dataset's versatility."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper addresses the limitations of current car datasets by capturing real-world car data with diverse samples, high-quality images, and point clouds.\n2. The idea of introducing 3 different lighting conditions is interesting and makes up for the shortcomings of existing datasets\n3. The extensive experiments demonstrate the effectiveness of this dataset on various tasks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes 3DRealCar, a large-scale 3D real car dataset with RGB-D images and point clouds of 2,500 cars captured in real-world environments. It addresses the limitations of existing car datasets that usually use synthetic data or have low-quality data. This dataset includes cars under three lighting conditions (standard, reflective, and dark), promoting research in 3D car reconstruction, parsing, and novel view synthesis. Benchmarks with state-of-the-art methods demonstrate that existing methods struggle in reflective and dark lighting conditions, emphasizing the dataset’s value for improving 3D reconstruction methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Though collecting data is costly, the contribution of this work is only on the dataset and lacks technical contribution for 3D/2D car understanding. \n2. This dataset captures cars under three lighting conditions, including reflective, standard, and dark; however, it seems to lack data of the same car under all three conditions, which limits the exploration of the effects of lighting on car appearance."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024drealcar,\ntitle={3{DR}ealCar: An In-the-wild {RGB}-D Car Dataset with 360-degree Views},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zo049dh2r9},\nnote={under review}\n}"
},
"abstract": {
"value": "3D cars are commonly used in self-driving systems, virtual/augmented reality, and games. However, existing 3D car datasets are either synthetic or low-quality, presenting a significant gap toward the high-quality real-world 3D car datasets and limiting their applications in practical scenarios. In this paper, we propose the first large-scale 3D real car dataset, termed 3DRealCar, offering three distinctive features. (1) \\textbf{High-Volume}: 2,500 cars are meticulously scanned by 3D scanners, obtaining car images and point clouds with real-world dimensions; (2) \\textbf{High-Quality}: Each car is captured in an average of 200 dense, high-resolution 360-degree RGB-D views, enabling high-fidelity 3D reconstruction; (3) \\textbf{High-Diversity}: The dataset contains various cars from over 100 brands, collected under three distinct lighting conditions, including reflective, standard, and dark. Additionally, we offer detailed car parsing maps for each instance to promote research in car parsing tasks. Moreover, we remove background point clouds and standardize the car orientation to a unified axis for the reconstruction only on cars without background and controllable rendering. We benchmark 3D reconstruction results with state-of-the-art methods across each lighting condition in 3DRealCar. Extensive experiments demonstrate that the standard lighting condition part of 3DRealCar can be used to produce a large number of high-quality 3D cars, improving various 2D and 3D tasks related to cars. Notably, our dataset brings insight into the fact that recent 3D reconstruction methods face challenges in reconstructing high-quality 3D cars under reflective and dark lighting conditions. \n\\textcolor{red}{\\href{https://3drealcar.github.io/}{Our dataset is available here.}}"
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"3D reconstruction",
"Car reconstruction",
"Car dataset"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/69c14187f86990933a0c9709860bbce38b21517b.pdf"
},
"presentation": null,
"primary_area": {
"value": "datasets and benchmarks"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/9efc0ae6c3c591caf196e83497c17a4dc5cc06fb.zip"
},
"title": {
"value": "3DRealCar: An In-the-wild RGB-D Car Dataset with 360-degree Views"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zp88xOXAfS | Linearly Interpretable Concept Embedding Model for Text Classification | main | Active | CBM;XAI;Interpretable AI | interpretability and explainable AI | 3;5;5;5;6 | 2;4;4;5;4 | 2;2;3;2;3 | 2;2;3;2;2 | 2;3;3;3;3 | 4.8 | 3.8 | 2.4 | 2.2 | 2.8 | 0.791667 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Compared to CBMs, how the model deal with high dimensional concept spaces?\n- What is the motivation behind fixing the temperature of Self generated LICEM as 0?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- LICEM offers a transparent, linearly interpretable structure, allowing users to understand how different concepts contribute to predictions.\n\n- The model maintains accuracy comparable to black-box models while enhancing interpretability.\n\n- A Self supervised LICEM variant reduces dependency on annotated concepts, enabling deployment in cases where annotations are unavailable."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents LICEM model for text classification that addresses the interpretability challenges in LLMs, which often lack transparency in decision making process. Unlike existing Concept-Bottleneck Models (CBMs), LICEM provides task-relevant explanations based on human-understandable concepts without sacrificing classification accuracy. This is achieved by utilizing a linear equation for predictions over concept embeddings, improving interpretability and enabling plausible explanations. A self-generative variant Self-LICEM is also proposed to removes the need for manual concept annotations by leveraging LLMs to predict concepts directly."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- LICEM is evaluated mainly on text classification; I'm wonderingif it could generlize to other domains. the assumption that concepts linearly relate to the output may not hold in complex tasks.\n\n- It is not clear if self-LICEM will inherit biases or hallocination from the used LLMs and how to deal with that. As it relies on prompt-responses from LLMs for concept prediction, this can introduce variability to the specific prompts used. \n\n- the concept embeddings may capture task-related signals from text classification. This could be seen as concept leakage too, which may reducing interpretability."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See the weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. LICEM advances interpretability in LLMs by using concept embeddings within a linear framework. This design choice enhances the clarity of the model’s decision-making process, enabling users to understand which concepts influence predictions directly.\n\n2. LICEM achieves high accuracy levels that match or exceed those of existing black-box models. By bridging the gap between accuracy and interpretability, LICEM offers a valuable improvement over traditional concept-bottleneck models, which often struggle with reduced classification performance.\n\n3. The inclusion of a user study that evaluates the plausibility and usefulness of LICEM's explanations strengthens the claim of improved interpretability."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces the Linearly Interpretable Concept Embedding Model (LICEM) for text classification, designed to improve interpretability without sacrificing classification accuracy. Traditional explainability methods in large language models (LLMs) often rely on post-hoc approaches like attention and gradient analysis, which have been found to provide limited insights. Although Concept-Bottleneck Models (CBMs) have been proposed for interpretable predictions, they suffer from limitations in accuracy, task interpretability, and the requirement of extensive annotations. LICEM addresses these issues by offering a linearly interpretable model that makes predictions based on concept embeddings, enabling high accuracy and interpretability without extensive concept labeling. Experimental results and a user study demonstrate that LICEM outperforms existing interpretable models and achieves similar or better performance than black-box models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. In this work, the authors mainly consider text classification. However, although the provided interpretations could help researchers understand the reasoning process during text classification, more complex tasks like natural language inference could be more suitable for evaluation. This is because generation tasks are more prevalent and crucial in the evaluation of LLMs.\n\n2. The use of instance-level linear equations for interpretability might introduce scalability challenges with large datasets or complex models. An analysis of LICEM’s scalability and computational efficiency would provide clearer insights into its practical feasibility for large-scale applications.\n\n3. This work cannot be applied to black-box LLMs, which are recently more powerful than smaller LMs that are white-box. The authors should consider an alternative strategy that is suitable for also black-box LLMs."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. The paper lacks a detailed analysis of LICEM’s efficiency. What is the computational cost associated with concept self-generation? what is the dimensionality of the concept embeddings, and how does it affect overall performance?\n2. What is the difference between LICEM and CEM with a linear layer?\n3. Could the authors provide details on the number of concepts generated for each task when using generative and self-generative approaches? This would help clarify scalability and feasibility across different tasks.\n4. How does LICEM handle previously unseen but important concepts that may appear at test time? Addressing this would strengthen LICEM’s applicability to real-world settings."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper improves over existing CEM by providing a more human-understandable linear combination of concept embeddings.\n2. LICEM achieves comparable results over black-box models on 4 text classification datasets, without requiring pre-annotated concepts."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a Linearly Interpretable Concept Embedding Model (LICEM) aimed at improving interpretability in text classification while maintaining high accuracy. Existing Concept-Bottleneck Models (CBMs) which often require manual concept annotations and face limitations in interpretability with non-linear predictors, LICEM uses a linear equation to classify text to provide better interpretability, and use a LLM to generate concepts. Experimental results suggest LICEM achieves comparable performance with black-box models on several text classification datasets while improving explainability."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. LICEM was tested on only four text classification tasks. As LICEM does not require manual concept annotation, evaluating it on more diverse datasets from general domains, such as the GLUE benchmark or 20 Newsgroups, would help verify the claim that LICEM can match the performance of black-box models for short text classification while offering better interpretability.\n2. While LICEM’s use of linearly interpretable concept embeddings is valuable, this approach has been explored in previous works, such as in Interpreting Embedding Spaces by Conceptualization."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "The paper includes a user study that may expose participants to potentially harmful content, such as datasets related to depression or drugs. Therefore, an ethics statement is needed to address these concerns."
},
"flag_for_ethics_review": {
"value": [
"Yes, Responsible research practice (e.g., human subjects, data release)"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "For self-LICEM, since the LLM provides almost everything, including embeddings and concept distributions, why do we still need an MLP head for predictions? Why not utilize the LLM itself to complete the entire framework, including prediction, like using Chain-of-thought, prompt engineer or SFT the model with the dataset? Introducing an additional MLP head seems to diminish the utility of the LLM's original LM head."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is well written and organized.\n2. The experiments are comprehensive.\n3. The proposed method is good at both prediction interpretation and classification performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces the Linearly Interpretable Concept Embedding Model (LICEM), which enhances the interpretability of neural networks by incorporating embeddings and concepts components generated by LLMs. The original LICEM trains a prediction layer to minimize both the task loss and the concept prediction loss. In contrast, the advanced self-LICEM, leveraging LLMs for concept prediction, only needs to minimize the task loss. The proposed method is good at both prediction interpretation and classification performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Although the final prediction is somewhat interpretable, intermediate steps like the concept distribution provided by LLMs remain uninterpretable, making the entire system still lack explainability.\n2. The framework is limited to classification tasks.\n3. The paper includes a user study that may expose participants to potentially harmful content, such as datasets related to depression or drugs. Therefore, an ethics statement is needed to address these concerns."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "How can you ensure that the self-generated concepts are correct?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The proposed method achieves better performance than previous methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a model, Linearly-Interpretable Concept Embedding Model (LICEM), in order to enhance the interpretability in text classification. It uses a self-supervised approach to generate human-interpretable concepts, eliminating the need for extensive labeled concept data. The experimental results prove that this model matches black-box models performance, is interpretable and can be trained without concept supervision."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "This work uses an LLM as a task predictor in Section 3.1, and produces self-generation in Section 3.2. However, LLMs' generations may contain hallucination, so the generations of LLMs should not be trusted. This work aims at increasing the interpretability, so the groundtruth is very important. It is not convincing when using one blackbox to interpret another blackbox.\n\nThe contribution seems to be limited. The performance is enhanced by using a LLM-based CEM to replace the original text encoder. However, this increasement is not surprising due to LLMs' strong ability."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024linearly,\ntitle={Linearly Interpretable Concept Embedding Model for Text Classification},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zp88xOXAfS},\nnote={under review}\n}"
},
"abstract": {
"value": "Despite their success, Large-Language Models (LLMs) still face criticism due to their lack of interpretability.\nTraditional post-hoc interpretation methods, based on attention and gradient-based analysis, offer limited insight as they only approximate the model's decision-making processes and have been proved to be unreliable.\nFor this reason, Concept-Bottleneck Models (CBMs) have been lately proposed in the textual field to provide interpretable predictions based on human-understandable concepts. \nHowever, CBMs still face several criticisms for their architectural constraints limiting their expressivity, for the absence of task-interpretability when employing non-linear task predictors and for requiring extensive annotations that are impractical for real-world text data. In this paper we address these challenges by proposing a novel Linearly Interpretable Concept Embedding Model (LICEM) going beyond the current accuracy-interpretability trade-off. LICEM classification accuracy is better than existing interpretable models and matches black-box models. The provided explanations are more plausible and useful with respect to existing solutions, as attested in a user study. Finally, we show our model can be trained without requiring any concept supervision, as concepts can be automatically predicted by the same LLM backbone."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"CBM",
"XAI",
"Interpretable AI"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/7b26f9b8604ba9ef4acd1e8d760582ce1ec3dc6d.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Linearly Interpretable Concept Embedding Model for Text Classification"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zpBamnxyPm | Why Has Predicting Downstream Capabilities of Frontier AI Models with Scale Remained Elusive? | main | Active | evaluations;benchmarks;scaling laws;emergent abilities;capabilities;frontier models;foundation models | foundation or frontier models, including LLMs | 5;6;6;6 | 4;3;4;3 | 2;4;3;3 | 2;3;3;3 | 3;3;3;4 | 5.75 | 3.5 | 3 | 2.75 | 3.25 | -0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "- Do you think these ideas will transfer to even larger models? I understand checkpoints for newer and larger models are usually not public."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The authors suggest alternate metrics to predict multi-choice question-answering performance than the current ones that is more predictive in real world benchmarks\n- Their alternate metrics are also general (for instance, they don't have conditions on performance being above some level) and easy to compute\n- This work should have a good impact on investigating alternate metrics for down-stream performance when pretraining LLMs"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors show that down-stream performance for multi-choice question answering can be predicted using alternative surrogates. This has a lot of importance when predicting real world model performance than simply looking at the regression loss."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- This work only focuses on multi-choice question answering, although quite general, it still is lacking when we consider even universal methods such as CoT does not directly fall into it, not to mention tool-usage etc."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Some key questions that the authors can explore are as follows:\n\nHow can their results and insights about probability fluctuations on incorrect choices generalize beyond multiple-choice benchmarks to other types of language tasks? Are there similar factors that cause unpredictability for other task formats?\n\nThe experiments focus on individual samples. Do the challenges with predicting performance due to probability fluctuations persist even when averaging over a distribution of samples? Or do the fluctuations tend to \"average out\"?\n\nThe authors discuss modeling the scaling trends for probability assigned to incorrect answer choices as a path towards better downstream predictability. How feasible do they think this approach is in practice? What challenges do they foresee? How does it scale?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The key insight that downstream unpredictability is caused by probability fluctuations on incorrect answer choices is novel and not obvious a priori. Framing the problem in terms of a sequence of transformations that degrade predictability is useful to make modular progress on the problem. The experiments are comprehensive, covering many models, benchmarks, performance metrics, and correlation measures. the authorsalso perform robustness checks to establish their claims.\n\nA key strength of the paper is the precise mathematical formulation of how downstream performance metrics are computed from pretraining log likelihoods. This formalism allows the authors to reason about how each step impacts the relationship between performance and scale.\nThe paper's experimental methodology is also comprehensive. The authors evaluate five model families on twelve diverse multiple-choice benchmarks, covering commonsense reasoning, science, math, social science, and the humanities. For each benchmark, they compute per-sample scores for three performance metrics (accuracy, Brier score, probability on correct choice) across many model scales. \n\nBeyond simply demonstrating that downstream metrics become less correlated with scale after a sequence of transformations, the authors provide a clear mechanistic explanation for why this occurs. They show in Section 5 that that fluctuations in the probability assigned to specific incorrect answer choices breaks the clean relationship between downstream performance and scale."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the reasons behind why predicting downstream capabilities of large language models has remained challenging despite well-established scaling laws for pretraining performance. The authors show that the process of computing downstream performance metrics like accuracy from pretraining log likelihoods involves a sequence of transformations that progressively degrade the statistical relationship between performance and scale. They identify the key mechanism causing this degradation: downstream metrics depend not just on the probability mass assigned to the correct answer choice, but also on how probability mass fluctuates on the specific incorrect answer choices. The probability assigned to incorrect choices does not have a strong predictable relationship with scale. The authors argue this explains the comparative unpredictability of downstream performance and suggest paths forward, like modeling the scaling of probability on incorrect choices. They also advise on designing more predictable downstream evaluations tightly coupled to pretraining log likelihoods. Overall, the paper provides valuable insights into the factors affecting downstream predictability and guidance on improving evaluation of frontier models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper focuses exclusively on multiple-choice question-answering benchmarks. While the authors justify this focus, multiple-choice has limitations as an evaluation format. It would strengthen the paper to discuss how the insights might generalize to other types of benchmarks like free-form language generation. Are there analogous factors that could cause unpredictability in other settings?\n\nThe experiments focus on predicting performance for individual samples. In practice, aggregate performance over a distribution is often of interest. The authors should also discuss if fluctuations on incorrect choices \"average out\" over a distribution, and if there are still challenges predicting aggregate performance.\nThe guidance on designing more predictable evaluations, is very limited. The authors should expand on this point with more details and examples of predictable benchmark designs."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "\"All the scores we discuss are per-datum.\" / \"Firstly, this negative log likelihood is not computed in expectation over a corpus;\" -> What does this mean? What would it even look like to do the log likelihood in expectation over the corpus?\n\nIs it fair to say that one of the main takeaways of your paper is that for best results, you would need to \"model the joint evolution of probabilities across all tokens, not just the correct value\" (and this is what you do in Section 6)?\n\nYou say \"one must predict not just how probability mass concentrates on correct choices with scale, but also how probability mass fluctuates on incorrect choices with scale.\" -> would it be more fair to say that you think that predicting how probability mass fluctuates on incorrect choices *may* enable better predictions, but you don't really know by how much? Unless I'm missing something, you don't have strong evidence that it would improve things substantially? The results from Figure 3 seem already quite strong without this additional modeling?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "The main strength of the paper is that the story is quite simple and is strongly supported by the experiments. It is also timely and a matter that has many downstream implications.\n\nThe empirical evaluation seems to be quite comprehensive and technically sound.\n\nThe authors also provide both a compelling mechanistic explanation as to why the probability mass on the incorrect choices matter, and actionable insights for the field moving forward in light of these results."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates why predicting downstream performance of large language models has remained challenging, despite the relative predictability of pretraining loss. The authors analyze multiple model families across various benchmarks and identify one factor which reduces the predictability of benchmark performance: people tend to focus on scaling laws looking at benchmark accuracy scores, rather than further upstream metrics such as the model logprobs on the correct choice (without renormalization to valid options)."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While the findings are interesting, I'm a bit confused as to why the authors did not try to use their insights to attempt novel predictions of performance on benchmarks, and just limited themselves to measuring correlations. \n\nAlso, I think I'm a bit confused about one of the takeaways: they say to use p^Vocab, but AFAICT in section 6 they also argue that to do even better in predicting benchmark scores, one would need to model the joint evolution of probabilities across all tokens. Do the authors think that the evolution of probabilities across other tokens is a large contributing factor to having good scaling laws? From the results from Figure 3, it seems like we already have very large correlations when using log p^Vocab?\n\nTo broadly recap, I think most of my confusion stems from the disconnect between the promising analysis using correlations, and the lack of actual predictions using their best technique (log p^Vocab), showing how they fit the performance that is actually obtained by models. It may be that I'm misunderstanding something."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Figure 2 [bottom left]: The peak of the distribution is narrow, and reaches at most 7% samples. It seems unlikely that that area under the distribution curve covers 100% of the samples. What exactly is this graph denoting, if not the per-sample correlation distribution of *all* samples?\n - Moreover, since curves here are most similar to the green illustrative example [top left], the complementary CDF is expected to flatten out to the very right end (close to 1 on the x-axis), but no plots for experiments in the paper seem to have that pattern. Why is that?\n- How do these findings relate to the objective mismatch problem: that the pre-training objective does not align with the evaluation objectives?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- Insightful analysis of the degradation of predictability of metrics of interest with model scaling, highlighting the need for other scale-predictable indicators of performance.\n- Extensive empirical evaluation spanning a wide range of model families and benchmark datasets.\n- Generally, a well-presented paper."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a case study demonstrating that measures of downstream performance do not correlate strongly with the scale of large language models, unlike measures of the pre-training performance. The study is conducted on multiple-choice question answering benchmarks, evaluated on downstream metrics like accuracy, brier score, etc with the pre-training metric being the probability mass on the correct choice. The key insight is that correlation with scale progressively reduces as the downstream metrics become increasingly non-linear functions of the pre-training metric. The reason is said to be the lack of consideration of the probability masses on the incorrect choices, which is then shown to be strongly correlated with the (predictable) pertaining metric."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper seems to draw some obvious conclusions in places, possibly highlighting an overarching drawback with using correlation-based analysis. Correlation measures the degree of *linear* relationship between two variables. Consequently, in the equation block around L357, it is known a priori that if $Corr(\\text{Compute}, \\log p_\\theta^{\\text{Vocab}}(\\text{Correct Choice}))$ is high, then dropping the $\\log$ is bound to reduce correlation statistic.\n\nThis highlights another potential oversight in the analysis: \n- Takeaway #1 and #3 rightly suggest thinking about scale-predictable metrics. The study highlights what transformations make metrics unpredictable, in particular, from $\\log p_\\theta^{\\text{Vocab}}(\\text{Correct Choice}) \\to p_\\theta^{\\text{Choices}}(\\text{Correct Choice})$. With the same reasoning, in conjunction with the first point above, it is highly likely that $\\log p_\\theta^{\\text{Choices}}(\\text{Correct Choice})$ will be scale-predictable although $p_\\theta^{\\text{Choices}}(\\text{Correct Choice})$ is not.\n - Experiments like those in Figures 3 and 4, but for $\\log p_\\theta^{\\text{Choices}}(\\text{Correct Choice})$ will be insightful to study this, and in line with the overall takeaways of the paper. I am willing to update my score depending on the author's response to this point in particular.\n\nNitpicks: \n- Equation (2): should have a $\\propto$.\n- $N$ is overloaded, in Equation (6) as well as under compute budget calculations."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "What makes predicting downstream capabilities of frontier AI models with scale difficult?"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024why,\ntitle={Why Has Predicting Downstream Capabilities of Frontier {AI} Models with Scale Remained Elusive?},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zpBamnxyPm},\nnote={under review}\n}"
},
"abstract": {
"value": "Predictable behavior from scaling advanced AI systems is an extremely desirable property for engineers, companies, economists and governments alike, and while a well-established literature exists on how pretraining performance scales, predictable scaling behavior on downstream capabilities remains elusive. While many factors are certainly responsible, this paper shines a light on a significant factor that makes predicting scaling behavior on widely used multiple-choice question answering benchmarks challenging and illuminates a path towards making such downstream evaluations predictable with scale. Using five model families and twelve well-established multiple-choice benchmarks, we show that downstream performance is computed from negative log likelihoods via a sequence of transformations that progressively degrades the statistical relationship between performance and scale. We then reveal the mechanism causing this degradation: downstream metrics require comparing the correct choice against a small number of specific incorrect choices, meaning accurately predicting downstream capabilities requires predicting not just how probability mass concentrates on the correct choice with scale, but also how probability mass fluctuates on specific incorrect choices with scale. We empirically study how probability mass on the correct choice co-varies with probability mass on incorrect choices with increasing compute, suggesting that scaling laws for \\textit{incorrect} choices might be achievable. Our work also explains why pretraining scaling laws are commonly regarded as more predictable than downstream capabilities and contributes towards establishing scaling-predictable evaluations of frontier AI models."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"evaluations",
"benchmarks",
"scaling laws",
"emergent abilities",
"capabilities",
"frontier models",
"foundation models"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/d6e44a223b6af37e6a0f1fc32ec80f502aa875d8.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Why Has Predicting Downstream Capabilities of Frontier AI Models with Scale Remained Elusive?"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zpDGwcmMV4 | How Can Language Models Learn from Mistakes on Grade-School Math Problems | main | Active | pretraining;language model;error correction;error detection | interpretability and explainable AI | 5;6;6;8 | 3;4;4;3 | 3;4;3;4 | 2;3;3;3 | 2;3;4;3 | 6.25 | 3.5 | 3.5 | 2.75 | 3 | -0.229416 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "See weaknesses above"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper is very well written. Although the results are dense, the authors did a good job summarizing and condensing the main takeaway messages.\n\n- The studied problem is interesting: We still don’t fully understand how to pretrain LLMs for effective reasoning from the ground up. This paper explores self-correction within the pretraining phase, a fresh perspective that hasn’t been widely explored in the literature, aside from a few works like Quiet-STaR.\n\n- The experiments and ablations are well-designed. The authors clearly state their research questions early on and address them in a logical sequence."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper improving math reasoning in LLMs by adding error-correction data directly into pretraining, instead of using the usual multi-round prompting---that dominant self-refine approach. Using a synthetic math dataset, the authors show that training with examples of mistakes followed by corrections leads to better reasoning accuracy, even beating models trained on error-free data. The study tries to answer some interesting question: how to prepare error-correction data, is finetuning sufficient to learn self-correction or is pretraining necessary and whether how this approach compares to beam search. Although on a synthetic and a very controlled setup, the results present some fresh perspective into pretraining LLMs to do better revisions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "My main concern with this paper is the uncertainty around whether these findings will generalize to practical LLM pretraining scenarios. I'll expand on some specific limitations below:\n\n- The types of problems in the i-GSM dataset don’t reflect real-world reasoning tasks. Specifically, every reasoning step required here is limited to a single type of computation—finding the next node in a directed acyclic graph (DAG) and calculating its value. But what about other reasoning types where models need to compare values, handle ambiguity, or apply commonsense knowledge? Although I appreciate the focus on math reasoning, can the authors confidently assert that these results will apply to more complex, realistic reasoning tasks?\n\n- Current LLMs struggle with error detection, as the authors note in the introduction. However, their findings in L259-260 suggest that error detection can be effortlessly embedded within the model’s internal states. This may be due to the task’s synthetic nature, where the model could have learned to encode specific errors, like “skip step,” in its parameters. But this is unlikely to generalize to other errors. For example, could the model's hidden states reliably detect other error types, like incorrect calculations?\n\n- The paper’s fine-tuning experiment is limited to simple LoRA tuning. What about full fine-tuning using a fraction of the pretraining data? The authors mention (L484-485) that the cost of full fine-tuning would match pretraining with retry data, but this wouldn't hold if we fine-tune with just a fraction of the pretraining data. Would the results remain consistent in that scenario?\n\n- I’d expect more discussion on inference-time algorithms and their impact on performance. If I’m following correctly, most experiments use greedy decoding or occasionally beam search. 
It would be insightful to understand how additional inference-time resources—like generating more tokens or applying consensus voting—might affect error correction. Result 4 (concerning model retries) is based on greedy decoding; how would this result change with sampling?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I would like to hear the authors' thoughts on the inference-time scaling properties of model trained with \"retry\" data: given a fixed inference-time computation budget, is it better for model to produce reasoning chains with retry (more tokens to reach final answer, but average accuracy is higher) or without retry (less tokens to reach final answer, but average accuracy is lower)?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "1. I like how the authors approach this problem in a data-centric, rigorous way: controlled experiments on synthetic data. \n2. Experiments are solid"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the impact of training language models with \"error-then-retry\" format data on the reasoning performance. With experiments on a synthetic GSM8k-style math reasoning datasets, the authors conclude that pretraining with such data improves reasoning accuracy than on same amount of \"error-free\" normal data, and LoRA fine-tuning with such data does not help."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper mainly experiments with one type of error: inserting a wrong parameter that cannot be computed next. While this is easy to implement, it would be hard to simulate all kinds of errors that language models can make in real-world reasonings scenarios. This cast doubts on how the suggested approach can be deployed to train LMs on non-synthetic data. If the authors could provide some discussion on how the method can be generalized to different errors (e.g., math calculation error, context misunderstanding, ...) in a scalable and controllable way, that would be very promising."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "The paper would benefit significantly from a reorganization into a structure that aligns more closely with the familiar flow of this venue. Specifically, it would help to present the experimental settings early in the paper, followed by a dedicated focus on results and analysis. Additionally, the authors could simplify the tables to better highlight key findings, and consider using plots to present some of the results for improved clarity and reader comprehension."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper proposed an interesting idea by incorporating error-correction data directly in pretraining, rather than relying on post-generation prompting for correction with empirical evidence of improved performances. \n- The authors compared different experimental settings, such as “retry upon regret,” masking, and pretraining versus fine-tuning with retry data."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper aims to improve language model accuracy in reasoning tasks (grade-school math) by pretraining with \"error-correction\" data, where errors are followed by immediate corrections, to teach models to self-correct as they generate outputs. The authors show that pretraining with error-correction data boosts reasoning accuracy, without multiple rounds of prompting."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper has unclear justification for its synthetic data choice.\n- The writing and structure of the paper are very unclear and hard to follow. For example, the paper contains no experimental-setting or conclusion section.\n- While the paper presents extensive experimental results, it primarily focuses on one model; it is unknown whether such techniques generalize to different models (e.g., LLaMA, Gemma, Mixtral, etc.). The paper further lacks comparisons to existing methods."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Will the experiment code and data be open sourced?\n- In Section 3.1, as can_next is “a linear classifier on top of its hidden states”, is it actually the linear classifier, rather than the model itself, that “knows can_next(A)=false”? Do the authors mean that, because the model's hidden states show a distribution that can be accurately predicted by the classifier, the model \"knows can_next(A)=false\"?\n- It would be appreciated if the authors could provide some insights into why “the slightly more complex retry miss data does not improve accuracy by much”"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "- The experiments are well-designed and rigorously conducted.\n- The finding that “even when model is pretrained on retry data with high error rate, it does not tend to produce erroneous steps” is interesting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work focuses on understanding the usefulness of “error-correction” data in the pretraining stage. The experimental results show that error-correction data can improve the mathematical ability of the model more effectively than error-free data."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Some previous work [1] has already pointed out that error-correction data enables LLMs to achieve higher reasoning accuracy compared to error-free data.\n- This work only evaluates on the IGSM dataset, which is synthetic. It would be more convincing to also experiment on realistic mathematical datasets.\n\n[1] Learning from mistakes makes llm better reasoner"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024how,\ntitle={How Can Language Models Learn from Mistakes on Grade-School Math Problems},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zpDGwcmMV4},\nnote={under review}\n}"
},
"abstract": {
"value": "Language models have demonstrated remarkable performance in solving reasoning tasks; however, even the strongest models still occasionally make reasoning mistakes. Recently, there has been active research aimed at improving reasoning accuracy, particularly by using pretrained language models to \"self-correct'' their mistakes via multi-round prompting. In this paper, we follow this line of work but focus on understanding the usefulness of incorporating ``error-correction'' data directly into the pretraining stage. This data consists of erroneous solution steps immediately followed by their corrections. Using a synthetic math dataset, we show promising results: this type of pretrain data can help language models achieve higher reasoning accuracy directly (i.e., through simple auto-regression, without multi-round prompting) compared to pretraining on the same amount of error-free data. We also delve into many details, such as (1) how this approach differs from beam search, (2) how such data can be prepared, (3) whether masking is needed on the erroneous tokens, (4) the amount of error required, (5) whether such data can be deferred to the fine-tuning stage, and many others."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"pretraining",
"language model",
"error correction",
"error detection"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/d26994c4774f71750d3df1ac11d929b1e1093b8c.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/70399206f9e928113c66e8ba722ee375ae60fcfb.pdf"
},
"title": {
"value": "How Can Language Models Learn from Mistakes on Grade-School Math Problems"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zpENPcQSj1 | Generalizing Reasoning Problems to Longer Lengths | main | Active | length generalization;learning to reason;length extrapolation | other topics in machine learning (i.e., none of the above) | 5;5;8 | 4;3;4 | 3;3;3 | 2;2;3 | 2;3;3 | 6 | 3.666667 | 3 | 2.333333 | 2.666667 | 0.5 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- From what I understand from Appendix D, we see that output[k] is not always exactly equal to input[k+1] (e.g., in the addition examples). I think this is some sort of processing that is not necessarily generalizable to other tasks because it uses the knowledge of the task too much (I personally see it an engineering hack). Is there a way to avoid that and be more agnostic towards the task?\n- Similar to the concern above, In Appendix D, we see some orange lines that are not predicted by the model and are manually inserted. I think this again reduces the learning part of the paper and is an engineering trick. Can it be avoided?\n- I'm not sure if I completely understand how the padding works. When you pad, you still have blank tokens at the start and end of the sequence. However, they are not the middle of any interval. So how are they predicted? Also are the padding tokens provided for all the examples in Appendix D?\n- In experiments, the length generalization ability is tried on different lengths and for all of them we see 100% performance. So I wonder what's the point that we see some reduction in the performance? Why haven't you tested the length generalization ability on longer sequences?\n- The experiments are done with specialized Transformers. I wonder what would be the length generalization performance if CoT steps were learned by a typical model, e.g., GPT2 or llama?\n\nMinor feedbacks: \n- From what I understand, it's not important where the intervals other than the anchor interval are located, is that right? I think it would be nice if this is further clarified in the paper. \n- I think Theorem 3.1 is doing more harm than good. I think the statement that is saying \"we have infinite continuations\" is more or less intuitive. Further, I think modeling discrete token space with continuous intervals is not justified. Also, why is Lipschitz assumption on these intervals a reasonable property? (and in that case what are the Lipschitz constants?) By default, we expect the distance of the tokens to be more or less the same, however, the Lipschitz assumption goes hand in hand with the Euclidean distance on the intervals which is problematic. For example, if tokens A, B, C correspond to points -1, 0, 1 on the interval. The Lipschitz assumption implies that the semantic distance between (A, B), (B, C) is smaller than (A,C). So I think the Lipschitz property and the use of continuous intervals in this Theorem is unjustified. I would really suggest to use a simpler theorem on discrete token space.\n- The abstract and intro imply that authors have found a necessity condition for length generalization and thus I think they should be revised. \n- I find the proof sketch part very interesting, and I'd suggest authors to further elaborate on it. (In that case, you may want to move some of the experiment details to the appendix)."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper tackles the important problem of length generalization on reasoning tasks. It provides a condition for CoTs such that, if that condition is satisfied, length generalization becomes potentially possible. Theoretical results on expressivity and empirical results on learning are provided in support of that condition. \n- The experimental results seem very interesting and strong (e.g., division is a newly considered task) although there are concerns (see below). \n- The multi-line scratchpad/CoT and its implementation are interesting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a framework for studying the length generalization ability of scratchpad/chain-of-thought (CoT) methods for reasoning problems such as arithmetic. More precisely, a condition, $(n, r)$-consistency, has been proposed such that if a CoT satisfies this property, one can design a model capable of length generalization (an approximation/expressivity result). The paper further provides experimental evidence showing that the proposed CoTs are indeed learnable."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- One of the main weaknesses is the modeling assumption. Generally, the CoT setting works as follows: when we give $S_0$ as input to the language model (LM), the LM generates a sequence of thoughts, $S_1, S_2, \\ldots, S_T$, that solves the task (the answer is often in $S_T$). However, the modeling in this paper is that we give $S_0$ to the model and get $S_1$, then we give $S_1$ as input to the model to get $S_2$, and so on. So instead of $model(S_0)=S_1, S_2, \\ldots, S_T$ we have $model(S_i)=S_{i+1}$. I think this modeling is not a significant issue; it is just an unconventional way (which may work better than the conventional way) that should be properly acknowledged, explained, and clarified throughout the paper. Currently the paper doesn't acknowledge this difference with normal CoT methods. Further, this problem becomes more significant in two ways:\n - The model doesn't learn the termination condition. In other words, it's not the model that understands $model(S_{T-1})=S_{T}$ is the final step and that it shouldn't compute $model(S_T)$; this is manually controlled by the user. \n - It seems some further processing is done on the inputs and outputs. In particular, it seems input[k+1] is not exactly output[k]. \n\nAll of this processing and manual handling seems like an engineering hack that would not generalize to other problems. (Although length generalization on tasks such as addition is important as an evaluation metric, we are more generally looking for approaches that would generalize to other problems as well -- in particular to problems where we have less control over the data.)"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "**Improvements** (that would make me raise my score, but may not be possible within the short rebuttal time)\n\n1. A significant overhaul / rewrite to emphasize that the main contribution is not a theory of length generalization and identification of a novel root cause (the root cause is well known in machine learning, mathematics and philosophy), but a definition of a problem class that has favorable properties w.r.t. length generalization built in, and these properties can be exploited by transformers.\n2. If the focus of a major revision lies on the theory (though I personally would recommend leaning more towards the empirical side), then the problem class of (n,r) consistent problems needs to be better understood theoretically. E.g., how does it relate to commonly known complexity classes such as regular languages or $TC^0$? Is this class likely to be complete, i.e., all problems where transformers can length generalize must be (n,r) consistent (which I personally doubt), and if not, what are counterexamples?\n3. Strengthen the point of how the current theoretical understanding can help design learnable CoT schemes that generalize. The empirical examples in the paper are good, but how much did the theoretical understanding contribute? Could it help to come up with a less manual process or procedure to design CoT schemes? How? Currently, the manual decomposition / rewriting of the original problem does all the heavy lifting.\n4. The notation in Sec. 3.3 is quite tedious to parse - a figure would really help (probably all of Def. 3.2 could easily be shown graphically).\n5. Theorem 3.6 is fine (there always exists a transformer that can solve the CoT problem), but what about learnability (via SGD)? How do we find that transformer in practice? Is that always easy or only sometimes, and if so, when?\n\n\n**Questions:**\n\n1. What can be said theoretically about the complexity of (n,r) consistent CoT schemes? How does the number of CoT steps scale with problem complexity and problem length (e.g. the CoT scheme for multiplication seems to not scale very well).\n2. How would one use the insights from the paper to design reasoning systems that can reason in many settings, or even at LLM / Frontier model scale? Currently the CoT format for each task is very different, and it is unclear that there is any synergy between learning different tasks simultaneously with a single model. The very different CoT schemes may even hurt performance when learning many tasks simultaneously.\n3. L 186: Why is it theoretically important that the lengths are the same?\n4. How can one in general determine if a problem is (n,r) consistent? Only by finding a valid decomposition (CoT scheme)? How does one find n and r in practice?\n\n\n**Minor comments:**\n1. L 133: “Note that we do not define CoT formally” - this is a weakness for a theory paper and should ideally be fixed.\n2. L 136 (Problem statement): The text says “performs well“ which is very informal, whereas the mathematical statement seems to say “performs perfectly in all steps (for any deterministic reasoning problem)”. This discrepancy needs to be fixed - either performing well means always the correct $S^{T+1}$, or there is some other error function that measures how well the model performs. Since this is a theory paper, being precise is important.\n3. L 170 says that “with only the continuity bias […] it is almost impossible to predict”. The continuity has never been formulated. I would rather rephrase this to: “without any complexity penalty (such as forms of continuity bias, (n,r) consistency, or forms of Occam’s razor) it is impossible to prefer one continuation over any other, and thus generalizing correctly would rely entirely on chance (with vanishingly small probability as the gap between $N$ and $N’$ grows”.\n4. L536-539: I would have liked to see that much earlier in the paper (e.g., around L176).\n5. L 251: $s_{j,l}$ and $s_{j,r}$ swapped.\n6. L 347: “We use 4 classes” - should be 5."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* Timely and important problem: length generalization even on simple tasks is a regime where transformers struggle.\n* Definition of an interesting class of length generalization problems: the class of problems where a decomposition with (n,r) consistency is possible. These problems match well onto, and are probably solvable (in theory) by, transformers.\n* Good empirical results on historically challenging problems like parity, addition, multiplication and division."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates the challenge of length generalization with transformers. The paper first points out the well-known fact that the induction problem has no correct solution. For length generalization, additional assumptions are thus necessary. The paper proposes to restrict length generalization to a class of problems that can be decomposed into a set of (n) repeated subproblems that only require local context (of length r) around a set of anchor positions: (n,r) consistency. This “necessary” condition (which is a bit of a tautology because the problem class was defined according to the necessary condition) is then shown to be sufficient for the problem to be solved with transformers in theory (ignoring questions around learnability and model capacity, etc.). As the paper notes in 3.5, there is no general method to implement the process that decomposes an original length generalization problem into a “Chain-of-Thought” (CoT) formulation that can be solved by sequential execution of a number of local subproblems. But the paper correctly argues that if solving this problem reformulation such that (n,r) consistency can be ensured, transformers should suffice in principle to solve the modified problem. (n,r) consistency can thus be seen as a guiding target for CoT formulations. A small set of empirical results confirms the relevance of the theoretical result in practice. For training the CoT decomposition of the original problem is done by manually designed, problem specific algorithms, and transformers are trained to execute these algorithms of stepwise rewriting of the problem and simultaneously solving suitable subproblems during rewriting."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The CoT decomposition does most of the heavy lifting - but it is done manually. The paper makes very little progress here, except defining (n,r) consistency as a guiding principle to aim for (but with no systematic process to get there, and no theoretical understanding of how the class of (n,r) consistent decomposable problems relates to general function and complexity classes (e.g., regular languages).\n* Writing throughout most of the manuscript makes claims that are either too grand or not sharp enough, which makes the first part of the paper a bit misleading. For example, (n,r) consistency is not *necessary* to solve length generalization in general. There are loads of other approaches that have been proposed before and that are theoretically very well understood - they typically use different biases, most commonly low-complexity biases, to answer the induction problem (such as Solomonoff Induction or Bayesian inference for example). What is correct is that when restricting to problems that allow for a (n,r) consistent decomposition, length generalization can be solved uniquely without requiring additional biases. I will be more specific in the Questions section. \n* Learnability and generalization with transformers in practice remain unclear (bounded capacity, finite data, SGD). The CoT decomposition works on the simple examples shown, but it is not clear whether this would hold at scale (the number of (n,r) consistent CoT steps may scale very badly for some problems); or even how to do this - e.g., how would one train a transformer that can solve all tasks in the paper simultaneously. How would one go about eventually reaching the scale of modern frontier models?\n\n**Verdict**\nOverall I think the research in the paper is on a good track, but the current theoretical understanding and presentation is lacking a bit. Large parts of the paper sound like the paper makes a fundamental theoretical contribution that solves length generalization once and for all (such as “a theorem to show the LG problem’s root cause”, or a ”necessary [theoretical condition] to resolve it.” ). Actually the “root cause” (theorem (3.1)) simply points out the very well known and much discussed fact that the induction problem (of which length generalization is one instance) has no correct solution (unlike logical deduction) only plausible or highly likely continuations, and that additional inductive biases are needed (commonly used are low-complexity biases such as Occam’s razor, formalised in many different ways). The “necessary” condition introduced in Sec. 3 is nothing but a restriction to a subset of length generalization problems (the condition is not necessary in general, it simply defines a problem class for which a solution strategy and uniqueness of the solution can be proven). I would strongly prefer if the paper were rewritten to acknowledge this (from the very beginning and throughout), by saying: here is a formal definition of a problem class for which length generalization with transformers can be proved theoretically, and empirical results align well with the theory. For a theory paper I would then like to see more theoretical understanding of this problem class; which length generalization problems can be massaged to fall into this class, and what can be said about the process to do this (can we bound the number of CoT steps in a meaningful way, how does the problem class relate to other well known complexity classes, etc). Alternatively the theory can be left lightweight as is, and the empirical part could be strengthened - but since the CoT decompositions must be manually designed, this is quite tedious. Overall I think the work is ready for presentation and discussion (and notions similar to (n,r) consistency have popped up recently in other works regarding length generalization), but the best format currently, I think, is a workshop. This is not meant to discourage the authors (after all the empirical results on addition, parity, and multiplication are nice - but they are mainly due to handcrafting), but I think a much stronger version of the manuscript is possible with a bit more time and a major revision."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "How robust is the approach to errors in the steps of the CoT? Can you conduct some experiments to study this and report in the final paper?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "LG is an important problem that demonstrates the power of transformers to learn procedures over inputs of arbitrary size. \n\nThe paper presents a sound theory that characterizes the root cause of the problem and gives a sufficient condition for its solution. \n\nThe experimental results are compelling and illustrate the successes and failures of generalization that depend on the (n,r)-consistency."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses the problem of generalizing chain-of-thought (CoT) reasoning to arbitrary lengths, which is called Length Generalization (LG). Previous work has only shown the effective generalization of CoT over inputs of the same length. \nThe paper first gives a theorem characterizing the root cause of the LG problem - namely, there exist infinitely many Lipschitz-continuous extensions of a Lipschitz-continuous function g over N inputs. This implies that LG cannot be achieved in general without some restrictions. It then proves a condition called (n, r)-consistency under which LG can be achieved. The paper gives experimental results that show that algorithms for parity, addition, multiplication, and division are learned by transformers with CoT representations that satisfy (n, r)-consistency."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The CoT training seems to be quite tedious for complex procedures such as multiplication. \n\nTypo: Def 3. ... are r-length."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "This paper first introduces a theorem to show the length generalization (LG) problem’s root cause, highlighting what is necessary to resolve it. It then proposes and proves a sufficient condition to solve LG"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024generalizing,\ntitle={Generalizing Reasoning Problems to Longer Lengths},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zpENPcQSj1},\nnote={under review}\n}"
},
"abstract": {
"value": "Length generalization (LG) (or length extrapolation) is a challenging problem in learning to reason. It refers to the phenomenon that when trained on reasoning problems of smaller lengths/sizes, the model struggles with problems of larger sizes or lengths. Although researchers have proven that reasoning can be learned if the intermediate reasoning steps (also known as chain-of-thought (CoT)) are given in the training data, their studies only apply to within a given length (interpolation), while LG is about extrapolation beyond the given length. This paper proposes an LG theory. It first introduces a theorem to show the LG problem’s root cause, highlighting what is necessary to resolve it. It then proposes and proves a sufficient condition, called (n, r)-consistency, under which LG can be achieved. Specifically, the theory says that if the CoT representation of a class of reasoning problems can satisfy the condition, LG is achievable for the class of problems. In the experimental evaluation, we present CoT representations based on the proposed theory to learn to solve challenging reasoning problems like arithmetic, parity, addition, multiplication, and division using a Transformer to achieve perfect LG."
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"length generalization",
"learning to reason",
"length extrapolation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/07ed45fa3f408af9d4277b591c17edb6bb4aa37a.pdf"
},
"presentation": null,
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/be05a105219c7533db1c39d5adde4bd7c8567fce.zip"
},
"title": {
"value": "Generalizing Reasoning Problems to Longer Lengths"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zpLcZ2AyDK | GraphIC: A Graph-Based In-Context Example Retrieval Model for Multi-Step Reasoning | main | Active | In-context learning;multi-step reasoning;thought graphs;large language model | foundation or frontier models, including LLMs | 3;5;5;6 | 4;3;3;3 | 3;3;2;3 | 3;3;3;3 | 2;3;1;3 | 4.75 | 3.25 | 2.75 | 3 | 2.25 | -0.927173 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Can you estimate the extra inference and time cost for GraphIC? Other retrieval baselines do not need to create the thought graph.\n2. In Figure 4, the performance keeps increasing with more in-context examples. Do you think it would benefit from even more examples? For example, on MBPP, can 16-shot outperform the CEIL baseline?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper uses a formalized reasoning representation to construct a thought graph for complex reasoning problems. Based on that, it can better model the underlying reasoning process than the semantic representation of natural language. \n2. It enhances the graph embedding by the personalized PageRank and establishes a probabilistic model for the thought graph. GraphIC retrieves in-context examples by selecting top-k candidate examples that can maximize the probability of generating the correct thought graph. The probabilistic method can better capture the examples that can enhance the reasoning of the new query. \n3. The paper verifies the proposed method on diverse reasoning benchmarks with multiple training-free and training-based retrieval baselines, and the results demonstrate its effectiveness. \n4. It further conducts a comprehensive ablation study on each component and investigates the symmetry of different retrieval methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper focuses on improving the selection of the in-context examples. It proposes GraphIC, which leverages the graph-structure and Bayesian Network to select in-context examples for complex reasoning tasks. Experiments on three types of reasoning tasks (math, code, and logical reasoning) demonstrate GraphIC outperforms the other ICE selection methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. GraphIC relies on the thought graph, which is generated as a formalized reasoning representation by LLMs. How can the correctness of the thought graphs of candidate examples be ensured? Could there be multiple possible thought graphs for the same query? Will these factors affect the robustness of GraphIC?\n2. For a test query q, GraphIC first creates the thought graph G^q without the ground-truth answer and retrieves in-context examples to maximize the probability density p_i(X^q). This also assumes that the thought graph G^q is correct. What if the thought graph has an incorrect reasoning process? Will it mislead the retrieval and thus affect the performance?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Can you please explicitly state, e.g., which joint distribution the BN is representing? What exactly is a \"graph of thought\"? Can you please provide a brief, clear, and self-contained description of the proposed method?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The proposed method seems to produce marginally better results than the baselines in most cases."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes GraphIC, a graph-based method for in-context example retrieval aimed at multi-step reasoning tasks. GraphIC models CoTs as \"thought graphs\", and uses Bayesian networks and personalized PageRank for selecting in-context examples with similar underlying graph structures. Empirical results show marginal improvements or competitive results on a wide array of reasoning tasks (GSM8K, AQUA, MBPP, ProofWriter).\n\nBayesian Networks (BNs) are a common way to represent complex joint distributions over sets of variables; however, this work never formalises which joint distribution its BNs aim to represent. Furthermore, in Eq. 2, it is not really true that \"commonly, in BNs, $p(x_i | pa(v_i)) = g(dist(x_i, \hat{x}_i))$\" -- BNs aim at representing dependence distributions between variables, like \"rainy weather\" and \"risk of aquaplaning\", and it's not really clear what a distance function between those variables should represent.\n\nFurthermore, the paper is based on the construction of \"thought graphs\", but it's not really clear what these are -- are they task dependent? What do they look like? Given a task, how does someone create a \"thought graph\"? The paper also says that \"to facilitate computation, we further represent the vertex attributes as the BERT embedding of corresponding text\" -- what is going on exactly?\n\nThen, my understanding is that the BN defines a distribution over \"thought graphs\", and that it defines the distribution over $\hat{x}_i$; what does $\hat{x}_i$ represent? (E.g. in Eq. 8)\n\nResults seem to be marginally better than the baselines in most cases, but it's really hard to understand what's going on in the proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "It is very hard to understand what's going on in the proposed method -- for example, the method uses Bayesian networks, but the paper never explicitly states which joint distribution the Bayesian network aims to represent."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* In the 'Estimation of Model Parameters' paragraph, the rank of W is set to 1 without a stated theoretical basis; how much precision loss does this cause?\n* As mentioned in the weaknesses, it may be hard for the LLM to construct a thought graph for complex problems; did you try more complex datasets, such as the widely used mathematical MATH dataset [1]?\n* How accurate is the LLM at generating a thought graph?\n* Multiple generations of the LLM may produce different thought graphs; how much does this affect the results?\n\n[1] https://github.com/hendrycks/math/"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* The motivation of the paper is reasonable and the proposed method is novel.\n* The writing of this paper is good, with a reasonable structure.\n* The experiments are relatively extensive, and the experimental results support the conclusions of the paper."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies how to retrieve demonstration examples more effectively in in-context learning to improve the performance of in-context learning. Specifically, the paper proposes a method named GraphIC, which establishes thought graphs and models them as Bayesian networks, then retrieves demonstration examples that make the probability density of queries' bayesian networks maximum. Experiments show the effectiveness of the proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* Some parts of the method section lack details; there are many assumptions but no stated conditions (refer to the questions).\n* The method relies on an LLM to construct the thought graph, which may be difficult or inaccurate when decomposing the key steps of complex problems.\n* There is a lack of experiments on the thought graph, which in my opinion is an important part of the method and has a big impact on its performance (refer to the questions)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "NA"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See the Weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The idea of selecting ICE with reasoning structure rather than semantics is novel and inspiring, and the human cognition-inspired methodology of capturing the underlying reasoning process with thought graph and BN is interesting.\n2. The inference of the proposed method is solid, which makes the method more reasonable.\n3. The authors conducted extensive experiments and deeper analysis to demonstrate the effectiveness of the proposed method and its components."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studied in-context example selection for ICL, and proposed a novel GraphIC method to capture reasoning processes for ICL in multi-step reasoning. GraphIC first generated thought graphs for candidate examples and query with LLM to encode the reasoning steps, and then employed a BN with PPR mechanism to studied the associated parameters reflecting the underlying reasoning structure. Finally, GraphIC selected the candidate examples with the parameter maximizing the probability of thought graph for the query in ICL. The authors conducted extensive experiments, and the results demonstrated that GraphIC outperformed both training-free and training-based baselines across various multi-step reasoning tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. There are some concerns on the method design.\n- According to the prompts in the Appendix, the proposed method seems to exploit the answer to generate the thought graph for both the examples and the query. So where does the answer come from? If the answer is fetched from the annotations, it seems to lead to label leakage. If the answer is generated by the LLM, how can its consistency with the real solution be ensured?\n- As the authors have mentioned in Section 3, a BN works on a DAG. However, there exist loops in the thought graphs for code, which violate the DAG assumption, as shown in Figure 3.\n2. The presentation of the manuscript could be further improved, especially the confusing and unclear contents.\n- The authors should explain the formal definition of the thought graph more clearly; one example might help. For instance, what do a vertex and an edge mean in the thought graph (does one vertex mean one step), how can it capture the reasoning process, and how is the corresponding text attribute obtained? If the text is that in Figure 3, I do not think it contains much useful information. Besides, it would be better to explain the vertex, edge, and random variable of the BN in Section 3 more clearly.\n- Related works on BNs should be cited properly.\n- The insights behind the parameter W should be explained more clearly. Why can the parameter reflect the reasoning structure, and why does the probability computed with the example's parameters and the query's graph reflect their similarity in reasoning?\n- The insight of the matrix B in PPR is confusing. What does the matrix mean, and why can it realize the retracing? Perhaps the authors could provide one example.\n- The authors mentioned the asymmetry of the proposed method, so I wonder about the advantages of asymmetry compared with symmetry.\n3. Some concerns on the experiments.\n- The authors could add the LLM without ICL as one baseline to demonstrate the improvements more clearly.\n- It would be better to add experiments comparing the computation time of the proposed method and other ICE selection baselines.\n- In Figure 4, it seems that the proposed method does not work very well with 1-4 shots. It would be better to give a clearer explanation.\n- It would be better to add studies on the effects of lambda."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024graphic,\ntitle={Graph{IC}: A Graph-Based In-Context Example Retrieval Model for Multi-Step Reasoning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zpLcZ2AyDK},\nnote={under review}\n}"
},
"abstract": {
"value": "In-context learning (ICL) enables large language models (LLMs) to generalize to new tasks by incorporating a few in-context examples (ICEs) directly in the input, without updating parameters. However, the effectiveness of ICL heavily relies on the selection of ICEs, and conventional text-based embedding methods are often inadequate for tasks that require multi-step reasoning, such as mathematical and logical problem solving. This is due to the bias introduced by shallow semantic similarities that fail to capture the deeper reasoning structures required for these tasks. We present GraphIC, a novel approach that leverages graph-based representations of reasoning processes, coupled with Bayesian Networks (BNs) to select ICEs. Graph structures inherently filter out shallow semantics while preserving the core reasoning structure. Importantly, BNs capture the dependency of a node’s attributes on its parent nodes, closely mirroring the hierarchical nature of human cognition—where each thought is shaped by preceding ones. This makes BNs particularly well-suited for multi-step reasoning tasks, aligning the process more closely with human-like reasoning. Extensive experiments across three types of reasoning tasks (mathematical reasoning, code generation, and logical reasoning) demonstrate that GraphIC outperforms both training-free and training-based models in selecting ICEs, excelling in terms of both effectiveness and efficiency. We show that GraphIC enhances ICL’s performance and interpretability, significantly advancing ICE selection for multi-step reasoning tasks."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"In-context learning",
"multi-step reasoning",
"thought graphs",
"large language model"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/0495b8038a91b4bd246c897a70c60e03fa874a86.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/9715dbe4a85041a377d5bd265561e2533f1cf098.zip"
},
"title": {
"value": "GraphIC: A Graph-Based In-Context Example Retrieval Model for Multi-Step Reasoning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zpX0teJu9Z | Geometry-Informed Neural Networks | main | Active | geometry;implicit neural representation;neural fields;theory-informed learning;geometric deep learning;physics-informed neural networks;generative design | learning on graphs and other geometries & topologies | 3;5;5;6 | 4;2;3;3 | 2;2;2;2 | 2;2;2;3 | 2;2;4;3 | 4.75 | 3 | 2 | 2.25 | 2.75 | -0.648886 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- How robust is the method to the choice of hyperparameters?\n- Would it be possible to demonstrate results on more than one example for engineering design?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "+ The paper is well written and easy to follow.\n+ The overall framework is meaningful and has a large number of potential applications (in particular in generative design and shape optimization).\n+ The promise of a representation that does not require a significant amount of data is very valuable, specifically because data in generative design is often scarce or requires expensive simulations.\n+ The diversity measure, demonstrated to work in practice for tackling mode collapse in implicit field learning, is potentially a critical contribution."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work introduces a novel framework for data-free geometry learning. The framework relies on an implicit geometry representation - a modulated conditional neural field - where conditioning on latent variables ensures diversity. Given such a representation, the authors propose to formulate a constrained optimization problem, in practice written down as a set of differentiable losses. The proposed constraints include: minimizing genus, smoothness, and diversity of generated shapes. The authors conduct a series of toy experiments to validate the model, and test it on a single engineering design problem."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- (minor) Most of the proposed constraints have been developed in previous work.\n- (minor) For the method to actually be useful for real-world applications (e.g. in shape optimization in engineering fields), a mapping to an explicit representation could be a hard requirement for compatibility with existing simulation tools.\n- Although the representation/optimization framework that does not require data has promise, it would be interesting to see if these methods can be used in combination with available data.\n- The experimental evaluation is very limited. In particular for shape optimization (which is the only truly realistic test iiuc), only a single example is provided, which makes it hard to understand how much of the result is due to parameter tuning."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Although the authors have attempted to provide some formulations in Table 1, I still feel that these modules are far from sufficient to support the design of a workpiece. However, designing new constraints requires extreme caution and a significant amount of skill. Therefore, I am not particularly optimistic about this section. If the authors could propose a more universal paradigm for generating such constraints, many of my concerns would be alleviated."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "This paper presents a well-formulated approach for shape-generative modeling driven by geometric constraints and objectives. It starts by considering the problem from a theoretical perspective and successfully translates it into an executable framework. The research problem addressed in this paper is valuable, and the preliminary experimental results provided by the authors demonstrate the effectiveness of their proposed method. Additionally, the paper is well-written and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a framework for achieving controllable generation of industrial part geometry. Specifically, the authors utilize an implicit neural field to represent the surface of industrial parts. Through optimization objectives in different aspects, the implicit neural field is adjusted to closely match the given targets, such as ensuring the zero-level set of the implicit field aligns with the geometric surface and the first-order derivatives of the implicit field align with the surface normals. Additionally, the authors attempt to introduce a regularization term to promote diverse optimization results. Finally, the authors conduct experiments on a challenging example and demonstrate the effectiveness of their approach."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The framework solution proposed in the paper is effective from a high-level perspective, but I anticipate numerous challenges when it comes to practical implementation. My primary concern is whether we truly have the capability to account for all objective functions based on individual intuition, especially considering that these functions must be feasible, differentiable, and ideally, non-conflicting. For instance, how should we approach the generation of a nut that matches a specific type of screw, or a particular joint bearing?\n\nMoreover, while the article emphasizes its data-free approach, it appears to me as an optimization-based strategy, with the optimization target being the implicit field represented by the neural network—a relatively conventional method. I do not find this particularly innovative.\n\nAdditionally, although Figure 6 does showcase a variety of workpieces, I doubt the compliance of these pieces with standard usage requirements; many seem to be diverse merely for the sake of diversity. \n\nLastly, the experiments are conducted on only one workpiece, which, despite its complexity, does not suffice to demonstrate the universality of the method."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "I’m not very familiar with the specific field (engineering design) of this paper. In my understanding, the idea of using objectives and constraints as loss functions is not completely new in the field of 3D generative models. But this paper has extended this idea with many more kinds of constraints and proposed the diversity constraint specifically for the engineering design problem. My only doubt is whether there are indeed no existing works to be compared with. I'd like to increase my rating if the authors can give more explanation on this."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- Although there have been a number of existing works using similar losses to train their networks, as mentioned in the related work, this paper presents the GINN framework with a very comprehensive summary and discussion of the common constraints used in the validation experiments. This would help follow-up works in formulating their own applications.\n- Compared to physics-informed neural networks or topology optimization, this paper proposes diversity constraints for geometry problems. Compared to other generative models such as Boltzmann generators, the proposed diversity constraints help to avoid mode collapse."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents the GINN framework, which formulates the objectives and constraints of a geometry problem as loss functions to train neural networks. More importantly, it proposes a diversity constraint, which avoids the mode collapse of other generative models and encourages the model to generate diverse solutions for the geometry problem. In the experiment section, it validates the performance of GINN on four problems and additionally conducts an engineering design case study. It is able to find accurate solutions for famous geometry problems, and to produce diverse solutions for the engineering design problem."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- My main concern is about the technical novelty and the evaluation. Although this paper presents many validation experiments, most of the results only show the performance of GINN and some ablation settings, without comparing to other existing methods. In Section 4 it claims that there are no established baselines, problems, and metrics for data-free shape-generative modeling. Does this mean that there is no related work trying to solve the engineering design problem presented in Section 4.3?\n- I understand that there is a great deal of information to be presented in the paper, but it is a bit difficult to capture the key information. For example, are “topology”, “smoothness”, and “surface sampling” the defined metrics for each constraint? It would be better to have “constrained optimization”, “metrics”, and “models” in bold font, and list all the metrics under the heading. The same problem exists in Section 4.3; there are too many headings, making the structure of the writing hard to follow."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "No concerns"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "N / A"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper proposes an interesting idea that is worthy of further exploration. The research proposes the idea of a framework, but it does not demonstrate a working instantiation of the idea, so the research is only in its beginning stages. I think the concept is potentially very promising, but there are also predictable and significant obstacles that must be overcome. Design by optimization is a longstanding topic in many fields, but there are also many problems. Most notably, there aren't many or even any good examples of where optimization alone can give rise to interesting designs. Most of the time, this requires a good initialization that is already a design or the combination of user input (design) and optimization. For example, in architectural design you typically need an initial surface and you can design a pattern on the surface (e.g. a paneling). In furniture design, you also typically need to restrict the design space to a meaningful subset, e.g. by providing procedural rules or templates. At the moment, the method is an extension of an idea that does not have a working instantiation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes geometry-informed neural networks. The idea is to train a generative model not from data, but using objective function and constraints. Basically, the generative model is trained by a specification similar to an optimization problem."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The major weakness of the paper is the lack of meaningful designs. Without interesting design examples, the paper is not meaningful. None of the designs shown in the paper are interesting and they cannot be recognized as mechanical, biological, architectural, ... objects. These are abstract designs like abstract art, and it is not meaningful to create art by optimization and then infer how this would transfer to engineering. I would suggest a three-step approach to tackling this problem:\n1) The project should identify an example where a single interesting design can be created by optimization. This example should not be abstract but specific to an engineering problem and be recognizable as an intentional and meaningful design. This example can be generated by optimization alone. It doesn't matter if the design is from architecture, mechanical engineering, biology, geology, ... but it should be meaningful. If you want to go for something more discrete, I would recommend furniture or CAD designs. This may not work well with your framework, so I could imagine that free-form architecture could be a better application area, e.g. \"Geodesic patterns\", developable surfaces, self-supporting surfaces, or quad meshes.\n2) The project should expand from this single design to generate a set of diverse designs by combining optimization with a diversity constraint.\n3) The project should example from a set of designs generated by optimization to combining optimization with generative modeling.\n\nThe current submission does not have a demonstration of either 1, 2, or 3. Competing work (not cited) has at least somewhat of a demonstration of points 1 and 2 (e.g., \"Fit and Diverse: Set Evolution for Inspiring 3D Shape Galleries\"), but it would be desirable to have even better examples. \n\n\nIt is possible to accept the paper for conceptual novelty, but I am not in favor of such a philosophy. 
A paper should introduce a conceptual novelty and at the same time introduce a working realization, not only the conceptual novelty. The conceptual novelty by itself should not be enough for publication. I conjecture that the reason why there aren't many other papers on this topic is because people could not find meaningful and working instantiations of this concept. There is an obvious approach: first generate a set of objects by optimization (e.g. furniture) and then train a generative model on this set. I am not aware of such a successful approach and a paper in this space should possibly demonstrate that it can beat such a baseline.."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduce GINN -- a framework for training shape-generative neural fields without data by leveraging design constraints and avoiding mode-collapse using a diversity loss."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024geometryinformed,\ntitle={Geometry-Informed Neural Networks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zpX0teJu9Z},\nnote={under review}\n}"
},
"abstract": {
"value": "Geometry is a ubiquitous tool in computer graphics, design, and engineering. However, the lack of large shape datasets limits the application of state-of-the-art supervised learning methods and motivates the exploration of alternative learning strategies. To this end, we introduce geometry-informed neural networks (GINNs) - a framework for training shape-generative neural fields *without data* by leveraging user-specified design requirements in the form of objectives and constraints. By adding *diversity* as an explicit constraint, GINNs avoid mode-collapse and can generate multiple diverse solutions, often required in geometry tasks. Experimentally, we apply GINNs to several introductory problems and a realistic 3D engineering design problem, showing control over geometrical and topological properties, such as surface smoothness or the number of holes. These results demonstrate the potential of training shape-generative models without data, paving the way for new generative design approaches without large datasets."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"geometry",
"implicit neural representation",
"neural fields",
"theory-informed learning",
"geometric deep learning",
"physics-informed neural networks",
"generative design"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/e56944e79911707d85311ab85e7e776889635458.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning on graphs and other geometries & topologies"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/9bc7c4e4788b162b62689e8320f730b54bc973d7.zip"
},
"title": {
"value": "Geometry-Informed Neural Networks"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zq1zTgSBro | SPEAR: Receiver-to-Receiver Acoustic Neural Warping Field | main | Active | Spatial Acoustic Effects;Receiver-to-Receiver;Neural Warping Field | applications to computer vision, audio, language, and other modalities | 3;3;5;10 | 4;4;3;4 | 2;3;2;4 | 1;2;2;4 | 2;3;3;4 | 5.25 | 3.75 | 2.75 | 2.25 | 3 | 0.050443 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1. Could the authors elaborate on potential methods to reduce the dense sampling requirement? Would techniques like data augmentation be feasible for this purpose?\n\n2. Are there specific changes or enhancements that could allow SPEAR to handle multi-level environments with varying elevations?\n\n3. For real-time applications, what optimizations could be implemented to further reduce inference time without sacrificing accuracy?\n\n4. Could the authors provide a comparison with conventional RIR-based approaches, detailing the trade-offs in accuracy, efficiency, and data requirements?"
},
"rating": {
"value": 10
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "1. The receiver-to-receiver formulation of spatial acoustics is innovative, providing a new paradigm in spatial audio modeling that does not rely on prior knowledge of acoustic properties.\n2. Methodologically rigorous, supported by a blend of theoretical analysis and robust experimental results across diverse datasets (synthetic, photo-realistic, real-world).\n3. The overall flow and structure of the paper are clear, with detailed explanations of each stage in the model’s development, including physical principles and architecture specifics.\n4. SPEAR’s method addresses a gap in the field, making spatial acoustics modeling more accessible for real-world applications in complex environments."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces SPEAR, a novel neural warping field model designed to predict spatial acoustic effects in a 3D environment with a single stationary audio source. Unlike traditional source-to-receiver models requiring prior knowledge of room acoustics, SPEAR operates from a receiver-to-receiver perspective, allowing it to predict how audio would sound at different spatial positions using only discrete audio recordings at various receiver positions. This framework is trained using synthetic, photo-realistic, and real-world datasets, demonstrating significant flexibility and generalizability across different environments. The paper's contributions include a new problem formulation, a theoretically supported neural architecture guided by three physical principles, and comprehensive experimentation showing SPEAR's accuracy and efficiency over baseline methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Sampling Density Requirement: A dense sampling of receiver positions is currently required for SPEAR to achieve optimal accuracy. This requirement may limit its scalability in highly variable environments.\n\n2. Positioning Constraint: SPEAR assumes all receiver positions lie on the same horizontal plane, which could restrict applications in multi-level or irregular environments. Addressing this limitation would extend the model’s utility."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* SPEAR learns the propagation of sound from one receiver (reference) to the second receiver (target). So to map all potential (reference, target) positions in the room is also combinatorically complex (as in the RIR case). Thus:\n * How much more efficient this method is to the previous ones (RIR-based / NAF)?\n * Please explain more clearly the benefits (or tradeoffs) between SPEAR and source-to-receiver modeling methods. In other words, why does the latter require prior space acoustic properties knowledge and SPEAR does not? (this is not clear in both the introduction and not in “related work” section)\n * How dependent is the method on the specific source signal? If the signal is narrowband (single frequency), then the method would not be able to estimate other frequencies.\n* There are two very significant claims in lines 162-165:\n * Warping transform is multiplicative in the frequency domain\n * The acoustic neural warping field is independent on audio content\nWhile you refer to the appendix for proof and discussion, I would suggest adding some coarse and intuitive explanation to why that is the case.\n* Equation (5) - there are cases where the transfer function H may be zero for some frequencies. Then the relations in equation (5) would not hold. How do you handle these cases? (in the experiments you mention that you use a Sine Sweep signal but this is not clear at this stage of the paper)\n* Equation (7) is a generalization of eq. (5): thus the determinant condition applies to equation (5) as well, but it is not mentioned.\n* In your SPEAR training you fix the position of the source. If we would like to use SPEAR with a source that is located in a different position (and in a different room), we would have to train it again in the new setting, correct? If so, this should be mentioned clearly\n* What applications could benefit from this method (especially compared to alternatives)?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* The method can be applied relatively easily since it does not require a lot of knowledge about the environment\n * It is relatively original since most other methods require more information about the environment or a more complex recording setup\n * Quality is somewhat unclear (see comments below)\n * Clarity and significance could be improved (see comments below)"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose a novel method for estimating the warping field that corresponds to sound propagation between two points inside a fixed environment. The method applies for a stationary source and two moving receivers (microphones). The receivers are synchronized, and thus the transfer function between the reference to the target receiver can be estimated from recordings in the frequency domain."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The method estimates the ratio between two transfer functions and this is prone to ill conditioning - however, it is not fully addressed. In the experiments they simply use clipping and zeroing to handle such cases, but how this affects performance is not clear\n* The strengths and weaknesses of the method are not sufficiently clear\n* I’m finding some difficulties in understanding exactly what SPEAR is trying to learn. In line 304: “...we can obtain the ground truth warping field…” - if you can calculate it analytically and then train the model to predict it, what exactly does the model learn? Some kind of generalization to other frequencies? Interpolation to grid points that are not measured? In any case, this is not discussed in the experiments\n* Experiments section (section 4.6)\n * There is too much emphasis on visual analysis of the warping field signal in the frequency domain (Figures 4, 6, 7, 8). It is very difficult \nto understand actual performance from these graphs. In all cases it looks like the estimated warping field is very different from the ground truth\n * Fig 5 - what is the meaning of the MSE values? Again, difficult to understand something about performance from reading absolute MSE values (i.e., how bad/good is an MSE of 1.06?)\n * Table 3 - was the metric measured for the same signal? It should be noted that depending on the frequency content of the signal, estimating the warping field is limited\n* It is not clear how to use the method given some very important parameters:\n * How close should the receiver be? Is this frequency dependent?\n * What source signal to use? \n *How many grid points should be sampled in the environment in order to estimate the warping field of the entire environment? Is this even possible?\n\n**Minor comments**\n* Introduction → contribution 3: “We demonstrate SPEAR superiority…” - superiority compared to what?\n* Lines 132-139: the relation to “Time-series Prediction” is not very clear. 
Please explain more clearly what is the relation between SPEAR and the type of networks you mentioned.\n* Equation (2) - is p_1 and p_2 the same as p_r and p_t ? if so, please be consistent with notation\n* Lines 308-309: “For ground truth warping field supervision, we combine both L1 and L2 loss.” - can you please provide an explicit equation for the loss? Is there a weighting parameter between the L1 and L2 losses? Why did you incorporate both L1 and L2?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Several questions related to the weaknesses mentioned above:\n\n1.\tWhat is the motivation for learning the RTF where both receivers can change positions while source position is fixed (in test time maybe we would like the source to change places)?\n2.\tWhat is the relation to previous works on RTF estimation [2-4]?\n3.\tWhat is the key contribution over [5]?\n\nAdditional minor questions: \n\n4.\tWhy is the Fourier transform used instead of STFT representation?\n5.\tLine 151 - the meaning of this notation $\\mathcal{F}\\leftarrow(\\mathcal{A},\\mathcal{P})$ is unclear.\n6.\tThere is inconsistency in the notation, switching from $p_1,p_2$ to $p_r,p_t$ and the same for the wrapping field.\n7.\tProposition 2 contains an over-general statement (“existence is not guaranteed”) and the proof is vague. In general, the identifiability of the mixing matrix in (7) was investigated under the field of independent component analysis (ICA), and there are certain conditions for which the matrix is identifiable (full rank matrix and at most one Gaussian source). The wrapping field can be defined as a vector of length K that contains the wrapping field for each individual source. \n8. Line 294: \"The two input positions’ features are extracted from the grid feature by bilinear interpolation\" - please clarify how the bilinear interpolation is performed. \n9. What is the motivation for using a transformer? What does the attention between patches is expected to learn? \t\n10. The proposed method is not necessarily more realistic compared to baselines. The required data is similar to collecting massive RIR data, since RIR can be extracted when the source signal is known. \n11.\tFig 4., why NN baseline is not presented?\n12.\tThe figures order does not follow the order they appear in the text - Fig. 6 should come before Fig. 5.\n13.\tWarping Field Sensitivity to Noise - is the noise added during training? 
What happens in higher noise levels?\n14.\tMissing details regarding experiments – what is the reverberation time? What is the signals length? \n15.\tWhy Fig. 5.a. does not contain comparison to NN? \n16.\tAppendix D is empty\n17.\tAre there any insights regarding the characteristics of the failure cases?\n18.\tTable 3 – It is unclear why the feature dimension is 384, and what is 43 in the initial token representation. Why is there a pruning step at the output?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper presents a novel viewpoint that learns the wrapping field that connects two receivers using a new transfomer-based model. A comprehensive experimental study is conducted with both simulated and real-world data, and different aspects of the proposed method are examined. The paper is clearly written and contains meaningful illustrations that clarify its core ideas."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a method for estimating the wrapping field in an enclosed acoustic environment, which is the relative transfer function between two receivers, when given the position of the two receivers. This method is meant to replace direct room impulse response estimation which relies on prior space acoustic properties and complex computations or requires massive amount of RIRs as a supervision. The method is shown to outperform baselines on both simulated and real-world data."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The usefulness of the proposed method is not well motivated. On the one hand, the wrapping field corresponds to a fixed source position, but in reality it is more informative to consider a source that can change locations. On the other hand, the space of all possible wrapping fields seems to be unnecessarily large, since it is defined by two receiver locations, whereas for RIR estimation the mapping is a function a single receiver only (and a source position). This is also evident from the vast amount of training samples that are required for training the model. Note that from these same measurements one can extract the RIR if the emitted source signal is known (which should be the case since the training recordings are performed in a controlled manner). \nNote that this idea of utilizing the wrapping field already exists in the literature for two decades. It is known as the Relative Transfer Function (RTF). A plethora of methods have been proposed to robustly estimate the RTF from measured signals (see a summary on [1] Section IV. C. 3). More close to the current paper, it was already proposed in previous works to generate RTFs based on source-receiver locations [2,3,4]. This relevant literature should be referred to in the paper, and it should be made clear what is the difference between these works and the current contribution.\nIn addition, the paper is very similar to [5], where it seems that the main difference is that the current paper deals with RTF estimation instead of RIR estimation, thus requiring an additional emitting sound at a fixed position. It should be clarified what is the main contribution of the current work compared to [5]. \n\n[1] Gannot, S., Vincent, E., Markovich-Golan, S., & Ozerov, A. (2017). A consolidated perspective on multimicrophone speech enhancement and source separation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 25(4), 692-730.\n\n[2] Wang, Z., Vincent, E., & Yan, Y. (2017). 
Relative transfer function inverse regression from low dimensional manifold. arXiv preprint arXiv:1710.09091.\n\n[3] Wang, Z., Li, J., Yan, Y., & Vincent, E. (2018, April). Semi-supervised learning with deep neural networks for relative transfer function inverse regression. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 191-195). IEEE.\n\n[4] Bianco, M. J., Gannot, S., Fernandez-Grande, E., & Gerstoft, P. (2021). Semi-supervised source localization in reverberant environments with deep generative modeling. IEEE Access, 9, 84956-84970.\n\n[5] He, Y., Cherian, A., Wichern, G., & Markham, A. (2024, January). Deep Neural Room Acoustics Primitive. In Forty-first International Conference on Machine Learning"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "A few minor comments based on the weaknesses mentioned above.\n\n- For the claim “to be more favorable to real-world scenarios” to be convincing, the method would need to be validated in cases where the source signal is not a sine sweep, but a more readily accessible sound like clapping (which is easy to generate and record at the receivers but difficult to acquire the original source). Furthermore, the performance under different receiver position sampling policies would need to be more rigorously validated to see how much space is covered by the receiver positions for training. In this respect, the choice of dataset in this paper could benefit from further diversification:\n\n - Pyroomacoustics was synthesized based on the image-source method in a shoebox room as stated in the paper\n\n - Sound Spaces 2.0 is also based on ray-based simulation and the office subset of Replica used by the authors is a single-room\n\n - MeshRIR is a real measurement, but the structure of the room is also a shoebox\n\n A dataset that can verify performance for non-trivial cases may be a multi-room dataset (where there could be no direct acoustic path, e.g., 'line of sight'), such as Replica's Apartment subset. In this case, other factors than the Reference-Target Distance reported in this paper may be important for sampling receiver positions. For example, if the source is in the living room of an apartment, how much detail should the training receiver be sampled to ensure performance?\n\n- Typos/misleading phrases\n - L182, L205: The part where the author expresses \"according to room acoustics (Savioja & Svensson, 2015)\" seems to assume an LTI system, but since not all room acoustics assume LTI, the expressions seem to need clarification.\n - L370 PSESQ $\\to$ PESQ\n - Figure 8 comes before Figure 6 and 7.\n - Wrong references to \"Fig. B\" (L456) and \"Fig. C\" (L464)"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "One of the strengths of this paper is its value as an attempt to alleviate the difficulty of data acquisition for sound field synthesis. It also deserves recognition for achieving large computational efficiency gains within moderate memory efficiency."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors point out that the problem of acoustic field estimation is characterized by the high cost of collecting prior information (about the physical properties such as geometries), and propose a relatively simple way to solve the problem: to estimate the receiver signal at another location from the receiver signal. They present their theoretical assumptions and backgrounds for their methodology and demonstrate its effectiveness on simulated and real recorded datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "For me, the justification for the main contribution of this study remains unclear. To point out my thoughts on the main contributions listed by the authors at the end of Section 1:\n\n1. Please clarify the reasoning behind the claim that the receiver-to-receiver framework requires less prior information than source-to-receiver (e.g. NAF).\n\n - At the end of the second paragraph of Section 1, the authors explain their motivation as collecting RIR data is exceedingly difficult in real scenarios, and their receiver-recorded audio is more readily accessible. What I don't understand about this argument is that they ended up using a sine sweep source for training, so in what way is this more efficient than training with RIR?\n\n - Let's say that the training data is not necessarily a sine sweep. One of the cases where a receiver-to-receiver framework can be beneficial, as the paper suggests, is when we don't know about the source signal. I'm curious to hear your thoughts on whether cases like this are prevalent in real-world scenarios. In my understanding, as there should be a receiver recording to train SPEAR in a certain room, one should first play the source signal to acquire the responses at each receiver point, meaning that we already know the source signal.\n\n2. The statements and proofs of the propositions in Section 3.2 seem to lack the rigor to be presented as major contributions.\n\n - What the authors describe as the claim and proof in the text for Proposition 1 is a direct consequence of Rayleigh's reciprocity theorem. Aside from the rigors in the proof of the “unique existence of warping given any pair of receivers”, there is something I don't understand about what the authors conclude in the last sentence of the proof: what does the claim “independent of the audio source” mean? 
Considering diffraction, the warping field should be variant depending on the source frequencies, isn’t it?\n \n - Similarly, there are several technical misleading statements throughout the paper that could be used to claim theoretical contributions, and the explanation of acoustic propagation is still underdeveloped. One example of this is the claim in Principle 1 in Section 3.3 that 'the entire 3D space is involved in the receiver recording', which could be misleading to those unfamiliar with audio. It would be better to claim instead that 'because the sound is generally more reflective than light, even signals reflected from occluded areas can be affected, and because of their lower velocity, the effect is more pronounced'.\n\n3. To claim to have revealed the superiority of SPEAR, the baseline selection is weak, and the ablation study did not sufficiently reveal the strengths and weaknesses of the proposed methodology.\n\n - At the very least, INRAS [1] should be included, and it would be nice to see other baselines such as AV-NeRF [2], DiffRIR [3], etc.\n\n - In the context of synthesizing acoustic warping fields for moving sources with a fixed receiver (which shares systematic similarities with this paper’s problem statement), WaveNet-based architectures are often used [4,5], or even MLPs [6] to estimate the frequency domain warping fields. How does the Transformer architecture bring advantages over these other architectures?\n\n - I wonder if re-training is essential for comparing performance with RIR-based source-to-receiver methodologies (including NAF). Even with keeping those source-to-receiver models as-is, we can estimate $H_{p_r}$ and $H_{p_t}$ directly, and it seems natural to be able to obtain $\\mathcal W_{p_r \\to p_t}=H_{p_t}/H_{p_r}$. Is there a reason to retrain the NAF nonetheless? 
Given such commutative nature of LTI systems, how is receiver-to-receiver learning systematically different from source-to-receiver learning, as one could readily get the impulse response for each of the receivers and then deconvolve one into the other?\n\n[1] Su, K., Chen, M., & Shlizerman, E. (2022). Inras: Implicit neural representation for audio scenes. Advances in Neural Information Processing Systems, 35, 8144-8158.\n[2] Liang, S., Huang, C., Tian, Y., Kumar, A., & Xu, C. (2023). Av-nerf: Learning neural fields for real-world audio-visual scene synthesis. Advances in Neural Information Processing Systems, 36, 37472-37490.\n[3] Wang, M. L., Sawata, R., Clarke, S., Gao, R., Wu, S., & Wu, J. (2024). Hearing Anything Anywhere. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 11790-11799).\n[4] Richard, A., Markovic, D., Gebru, I. D., Krenn, S., Butler, G. A., Torre, F., & Sheikh, Y. (2021). Neural synthesis of binaural speech from mono audio. In International Conference on Learning Representations.\n[5] Leng, Y., Chen, Z., Guo, J., Liu, H., Chen, J., Tan, X., ... & Liu, T. Y. (2022). Binauralgrad: A two-stage conditional diffusion probabilistic model for binaural audio synthesis. Advances in Neural Information Processing Systems, 35, 23689-23700.\n[6] Lee, J. W., & Lee, K. (2023, June). Neural fourier shift for binaural speech rendering. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 1-5). IEEE."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Learn an acoustic warping field to predict one receiver's spatial acoustic effects from another receiver"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024spear,\ntitle={{SPEAR}: Receiver-to-Receiver Acoustic Neural Warping Field},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zq1zTgSBro},\nnote={under review}\n}"
},
"abstract": {
"value": "We present SPEAR, a continuous receiver-to-receiver acoustic neural warping field for spatial acoustic effects prediction in an acoustic 3D space with a single stationary audio source. Unlike traditional source-to-receiver modelling methods that require prior knowledge of the space's acoustic properties to rigorously model audio propagation from source to receiver, we propose to predict by warping the spatial acoustic effects from one reference receiver position to another target receiver position, so that the warped audio essentially accommodates all spatial acoustic effects belonging to the target position. SPEAR can be trained with much more readily accessible data: we simply ask two robots to independently record spatial audio at different positions. We further theoretically prove the universal existence of the warping field if and only if a single audio source is present. Three physical principles are incorporated to guide the SPEAR network design, making the learned warping field physically meaningful. We demonstrate SPEAR's superiority through detailed experiments on synthetic, photo-realistic and real-world datasets."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Spatial Acoustic Effects",
"Receiver-to-Receiver",
"Neural Warping Field"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/3bb00bb6ab6c6d22e9ff5879ea1ec43038a0938a.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/c122be2d66be663b714e1578e1d8cab031c105c5.zip"
},
"title": {
"value": "SPEAR: Receiver-to-Receiver Acoustic Neural Warping Field"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zqA19DirIT | REAL-TIME LAYOUT ADAPTATION USING GENERATIVE AI | main | Desk Reject | GenAI;SupervisedLearning;React;Web-Design;ChatGPT | applications to computer vision, audio, language, and other modalities | Sanshray Singh Langeh;Mandar Zope | ~Sanshray_Singh_Langeh1;~Mandar_Zope1 | 0 | 0 | 0 | 0 | 0 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": {
"value": "the paper does not comply with the length requirements outlined in the CFP."
},
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": {
"value": "Submission Desk Rejected by Program Chairs"
},
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Dynamic Web Layouts Driven by Generative AI and User Behavior"
},
"_bibtex": {
"value": "@misc{\nlangeh2024realtime,\ntitle={{REAL}-{TIME} {LAYOUT} {ADAPTATION} {USING} {GENERATIVE} {AI}},\nauthor={Sanshray Singh Langeh and Mandar Zope},\nyear={2024},\nurl={https://openreview.net/forum?id=zqA19DirIT}\n}"
},
"abstract": {
"value": "In modern web design, ensuring adaptability and user engagement through dynamic layouts is increasingly important. With the growing demand for personalized user experiences, traditional static web layouts are insufficient for meeting user preferences. This paper introduces an innovative approach that leverages generative AI to dynamically adapt web layouts in real-time. With the help of data that is collected under the banner of user interactions through technologies such as JavaScript and Node.js, we are able to save those interactions, which not only include the click patterns but also the timestamps, user’s name, day and date, and number of clicks.\n\nThese clicks correspond to interactions of users with different React components. This data is being stored as a CSV file, as it is easier to read when it comes to parsing it to an AI model. Once every designated cycle, the data is fed to a Python script which does an API call to the $Chat GPT 4o$ model, which then analyzes the data and rewrites the CSS to create a new web layout based on the user’s interactions. \n\nThis successfully gives a web interface that adapts its layout in real-time, which is somewhat similar to many recommendation systems of popular applications like Netflix and Amazon Prime. Its significance extends across multiple fields, as this approach can enhance user engagement by dynamically displaying components based on user interaction patterns. Additionally, it offers potential revenue growth for companies, allowing them to charge higher rates for ads strategically placed in high-engagement areas of the layout, based on inferred user data.\n\nFor example, let the number of clicks be represented as $N_c$ and the user interaction patterns as $P_u$. The revenue potential $R$ can be expressed as:\n$$\nR = k \\cdot N_c \\cdot P_u,\n$$\nwhere $k$ is a constant representing the ad placement value."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": {
"value": [
"~Sanshray_Singh_Langeh1",
"~Mandar_Zope1"
]
},
"authors": {
"value": [
"Sanshray Singh Langeh",
"Mandar Zope"
]
},
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"GenAI",
"SupervisedLearning",
"React",
"Web-Design",
"ChatGPT"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": {
"value": "langeh|realtime_layout_adaptation_using_generative_ai"
},
"pdf": {
"value": "/pdf/4787d54c4430db3d39f164937d3cb3ebee92e33e.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "REAL-TIME LAYOUT ADAPTATION USING GENERATIVE AI"
},
"venue": {
"value": "ICLR 2025 Conference Desk Rejected Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Desk_Rejected_Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
||||||||||
zqXANcFO9T | Compressed Decentralized Learning with Error-Feedback under Data Heterogeneity | main | Active | distributed training;error-feedback;convergence analysis | optimization | 1;1;3 | 5;4;4 | 1;1;2 | 1;1;2 | 2;1;3 | 1.666667 | 4.333333 | 1.333333 | 1.333333 | 2 | -0.5 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "(**Additional References**)\n\nAlong the line of compressed decentralized algorithms for tackling data heterogeneity, I suggest the following two references that should be covered in the related work. [1] is an algorithm with large batch stochastic gradient tracking (see Theorem 4.2 of [1]) while [2] is an algorithm with constant batch stochastic gradient tracking (see Appendix B of [2]). Gradient tracking algorithms are known to converge without assuming a uniform bound on the similarity between local and global objective gradients.\n\n(**About Comparison to CHOCO-SGD**)\n\nThe paragraph **Comparison between DEFD-PSGD and CHOCO-PSGD** is confusing and here is my point of view:\n\n- I assume that the authors are referring to CHOCO-SGD in [3] as equivalent to CHOCO-PSGD in Algorithm A.1. Otherwise, the comparison and discussion made between DEFD-PSGD and CHOCO-PSGD are not meaningful because DEFD-PSGD should be compared against CHOCO-SGD. However, it is not clear how to show the equivalence of Algorithm A.1 to CHOCO-SGD in [Koloskova et al., 2019] or in [3]. So I request the authors to show a formal proof of equivalence explicitly. (A similar comparison between two error-feedback schemes is studied in [4], where Algorithm 1 in [4] has a similar error-feedback mechanism as CHOCO-SGD while Algorithm 4 in [4] has a similar error-feedback mechanism as DEFD-PSGD.)\n\n- CHOCO-SGD is an algorithm proven to converge under any degree of data heterogeneity; for instance, the analysis in [Koloskova et al., 2019] and [3] only assumes a bounded stochastic gradient and does not impose any assumption on data heterogeneity. Therefore, the claim `so that when data is highly heterogeneous, the CHOCO-PSGD may diverge` in line 389 is not theoretically supported.
I suggest the authors provide more evidence supporting the claim or revise the discussion.\n\n- The experiment results in Figure 2 of [2] suggest that CHOCO-SGD shows slow convergence under a heterogeneous distribution of MNIST. This does not match the claim in Figure 1 of this paper, which states that CHOCO-SGD cannot converge under heterogeneous data. I encourage the authors to conduct additional experiments on a setup similar to [2], to gather more insights about CHOCO-SGD and address the divergence of CHOCO-SGD with more solid evidence, such as what parameter tuning is used.\n\n- The experiment result would be more convincing if a comparison were made between DEFD-PSGD and [1], [2] in the heterogeneous data setting.\n\n\n(**About Data Heterogeneity**)\n- Can the authors provide more insights on how DEFD-PSGD mitigates data heterogeneity through the analysis result? For instance, please explain how the algorithm reacts to different values of $\epsilon$ and whether that effectively tackles the data heterogeneity error in terms of the convergence bound. \n- It would be more convincing if the experiment results could be presented with different levels of data heterogeneity (different Dirichlet parameter $\alpha$) in the same plot to demonstrate how different algorithms react to the error of data heterogeneity.\n\n(**About Consensus Error**)\n- I suggest the authors include a convergence bound on the consensus error in the main text and discuss how DEFD-PSGD benefits from using exact synchronized model gossip communication. This will provide more insight into the advantage of DEFD-PSGD, e.g., whether the dependence on $\epsilon$ is better than that of other algorithms (such as Theorem 4 in [Lian et al., 2017]) in terms of the consensus error bound.\n\n[1] Haoyu Zhao, Boyue Li, Zhize Li, Peter Richtárik, and Yuejie Chi. 
BEER: Fast $\\mathcal{O} (1/T)$ Rate for Decentralized Nonconvex Optimization with Communication Compression, 2022.\n\n[2] Chung-Yiu Yau, Hoi-To Wai. DoCoM: Compressed Decentralized Optimization with Near-Optimal Sample Complexity, 2022.\n\n[3] Anastasia Koloskova, Tao Lin, Sebastian U. Stich, Martin Jaggi. Decentralized Deep Learning with Arbitrary Communication Compression. 2020.\n\n[4] Peter Richtárik, Igor Sokolov, Ilyas Fatkhullin. EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback, 2021."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "DEFD-PSGD is a novel algorithm in the sense that it synchronizes model parameters by applying compressed model updates, unlike prior works [Koloskova et al., 2019], which consider applying compressed model gossip."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a compressed decentralized algorithm that aims to control the error due to data heterogeneity. The proposed algorithm, DEFD-PSGD, applies compressed model updates and is therefore able to perform exact model gossip under compressed communication."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The convergence guarantee only holds for a restricted range of $\beta$, where $\beta$ is the relative error of the contractive compressor. This is non-standard, especially along the line of work of [Koloskova et al., 2019], [1], [2], [3] from the additional references below. Can the authors explain why a restricted range of $\beta$ applies to DEFD-PSGD? I suggest the authors expand the current analysis to the case of any $\beta \in (0, 1)$, e.g., by transferring the dependence to the parameter $\gamma$.\n- In the context of nonconvex optimization, [3] is a more accurate reference than [Koloskova et al., 2019] for mentioning CHOCO-SGD because [3] provided the convergence of CHOCO-SGD on a nonconvex objective while [Koloskova et al., 2019] only considered a convex objective.\n\n(**Notations**)\n- In equations (2), (6), (10), $\nabla f(\frac{X_t {\bf 1}_n}{n})$ should be $\\|\nabla f(\frac{X_t {\bf 1}_n}{n}) \\|^2$ instead.\n- The constants $a,b,c,\mu$ are not defined in or before Theorem 1.\n- In line 243 \"neightbors\" should be \"neighbors\"."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. Could the authors illustrate how to design a compressor with a high degree of compression that satisfies both unbiasedness and a bounded compression error of $\beta\le 1$?\n2. As in weakness 1, can the authors compare the proposed algorithm with more recent baselines, such as CEDAS?\n3. As in weakness 2, why do the assumptions seem to contradict a high compression degree and large data heterogeneity? Please correct me if I'm wrong."
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "1. The algorithm is simple and clear.\n2. The theoretical convergence rate matches D-PSGD.\n3. The proposed algorithm performs better than other baselines in the experiments."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a decentralized optimization algorithm with communication compression called DEFD-PSGD, which applies the error-feedback mechanism. Under certain assumptions, DEFD-PSGD is shown to converge exactly. In numerical experiments, DEFD-PSGD is shown to perform better than CHOCO-PSGD."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The compared baselines are sub-optimal. By comparing only with baseline algorithms DCD-PSGD and CHOCO-PSGD, which are from 2019 and earlier, it is not appropriate to claim that DEFD-PSGD \"outperforms other state-of-the-art decentralized learning algorithms\". CEDAS in 2023 has already beaten CHOCO-PSGD by a large margin. \n2. The assumptions used in the theoretical analysis are too strong. Specifically:\\\ni) Bounded gradient divergence: To my knowledge, the aim of using error-feedback should be to remove this assumption, which leads to a small-heterogeneity setting. The use of this assumption seems to contradict the aim of optimizing under data heterogeneity.\\\nii) Unbiased stochastic compression & Bounded compression error: Usually compressed algorithms only need to assume either unbiasedness or $\beta\le1$. Assuming both is too strong, and even contradicts \"a high degree of gradient compression\"."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Consider revising the algorithm design to eliminate the storage requirement for neighbors’ model weights. Existing algorithms avoid this need, and its inclusion here may render the algorithm impractical.\n\n2. Please consider removing either the unbiasedness or the contractive assumption. Existing algorithms typically do not require both.\n\n3. The coefficient $\\beta$ is generally a fixed constant determined by the choice of compressor. For example, random sparsification yields $\\beta = d/k - 1$, and random quantization results in $\\beta > 1$. Why, then, do you assume that $\\beta$ can be as small as you want? See your assumptions on the range of $\\beta$ in Theorem 1, Corollary 1, and Corollary 2.\n\n4. In Corollary 2, please clarify how network topology and the compression factor $\\beta$ influence the convergence rate. Additionally, please compare your convergence rate with that of [C2, C3] and the algorithms listed in Table 1 of [C3], and explicitly outline the advantages of your algorithm.\n\n5. Consider constructing compressors that simultaneously satisfy both the unbiased and contractive assumptions, and use these in your simulations. Additionally, please include more baseline comparisons beyond Choco-SGD and DCD-PSGD. Finally, clarify why your CIFAR-10 simulations yield low accuracy."
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "This paper identifies a key challenge in the existing literature: the inability to handle both a high degree of gradient compression and significant data heterogeneity simultaneously."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper investigates compressed decentralized learning in the presence of data heterogeneity, revealing limitations in existing algorithms when both gradient compression and data heterogeneity are high. To address these issues, the authors propose the DEFD-PSGD algorithm, designed to handle substantial gradient compression and data heterogeneity effectively, while maintaining communication efficiency. The authors provide theoretical analyses and validate their approach through numerical experiments."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Overall, this paper is not well-written and has several significant limitations, detailed as follows:\n\n1. The proposed algorithm is impractical due to its high memory requirements. Specifically, the algorithm requires each node to store the model weights $\\\\{x^{i,j}\\\\}_{j \\in \\mathcal{N}_i}$ for all its neighbors, as seen on the left-hand side of line 4 and the right-hand side of line 7 in Algorithm 1. This poses a significant limitation. With large models reaching billions of parameters, even a single set of model weights requires substantial memory. For instance, storing one copy of a 175-billion-parameter model demands over 350GB, making the storage of neighbors' weights prohibitively memory-intensive and leading to notable hardware inefficiencies. Most existing compressed decentralized algorithms do not require each node to store neighbors' weights; see baseline algorithms like DCD-PSGD (Tang et al., 2018) and Choco-SGD (Koloskova et al., 2019). Moreover, other compressed decentralized methods, including those listed below [C1–C3], also avoid storing neighbors' weights. Thus, this requirement presents a strong limitation of the proposed algorithm.\n\n\n2. While the authors claim that their convergence analysis is based on the most commonly used assumptions (see Contribution 2), this is unfortunately not the case. The authors assume that the compressors are both unbiased and contractive (i.e., $ \\beta < 1 $; see the last two conditions in Assumption 1), which is rarely encountered in the literature. Typically, existing work assumes compressors are either unbiased—such as random sparsification [C4] and random quantization [C5]—or contractive, with top-$ K $ compression as an example, but do not assume them to hold simultaneously. Very few compressors satisfy both unbiasedness and contractiveness simultaneously. 
For instance, as shown in [C6], the random sparsification compressor [C4] is unbiased but has $ \beta = d/k - 1 $ where $ d \gg k $, and random quantization [C5] is unbiased with $ \beta > 2 $. Another common unbiased compressor, natural compression [C6], has $ \beta = 9/8 > 1 $. None of these widely used unbiased compressors meet the assumptions in this paper. Consequently, the assumptions here are restrictive and unlikely to hold in practical scenarios. This is another strong limitation. The standard assumption is to assume either unbiased or contractive compressors, but not both; see Assumptions 3 and 4 in [C2]. \n\n3. The presented convergence results are weak. The convergence rate in (10) does not clarify the effects of network topology and the compression factor $\beta$. In contrast, existing literature, such as [C3], explicitly characterizes the influence of network topology and compression. Additionally, this paper appears unfamiliar with many state-of-the-art decentralized algorithms, including those listed in Table 1 of [C3]. Compared to these established algorithms, I do not observe clear advantages of the proposed approach's convergence rate. Moreover, it is important to note that the proposed algorithm relies on highly restrictive assumptions, such as simultaneous unbiasedness and contraction, along with bounded gradient dissimilarity. By comparison, [C2] and [C3] do not rely on these restrictive conditions.\n\n4. The simulations are trivial. First of all, the tested top-K and random quantization compressors do not simultaneously satisfy the unbiasedness and contraction properties. Second, the baselines are trivial; more advanced baselines are in [C2, C3] as well as those listed in Table 1 of [C3]. Third, it is very strange that compressed algorithms on CIFAR-10 can only achieve accuracy below 70%. The typical accuracy is above 90%. \n\n[C1] Liu et al., \"LINEAR CONVERGENT DECENTRALIZED OPTIMIZATION WITH COMPRESSION\", ICLR 2021\n\n[C2] Huang and Pu, \"CEDAS: A Compressed Decentralized Stochastic Gradient Method with Improved Convergence\", arXiv 2301.05872, 2023\n\n[C3] Islamov et al., \"Near Optimal Decentralized Optimization with Compression and Momentum Tracking\", arXiv 2405.20114, 2024\n\n[C4] Wangni et al., \"Gradient Sparsification for Communication-Efficient Distributed Optimization\", NeurIPS 2018. \n\n[C5] Alistarh et al., \"QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding\", NIPS 2017\n\n[C6] Horváth et al., \"Natural Compression for Distributed Deep Learning\", 2022"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024compressed,\ntitle={Compressed Decentralized Learning with Error-Feedback under Data Heterogeneity},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zqXANcFO9T},\nnote={under review}\n}"
},
"abstract": {
"value": "Decentralized learning distributes the training process across multiple nodes, enabling collaborative model training without relying on a central server. Each node performs local training using its own data, with model updates exchanged directly between connected nodes within a given network topology. Various algorithms have been developed within this decentralized learning framework and have been proven to converge under specific assumptions. However, two key challenges remain: 1) ensuring robust performance with both a high degree of gradient compression and data heterogeneity, and 2) providing a general convergence upper bound under commonly used assumptions. To address these challenges, we propose the *Discounted Error-Feedback Decentralized Parallel Stochastic Gradient Descent (DEFD-PSGD)* algorithm, which efficiently manages both high levels of gradient compression and data heterogeneity, without sacrificing communication efficiency. The core idea is to introduce controllable residual error feedback that effectively balances the impact of gradient compression and data heterogeneity. Additionally, we develop novel proof techniques to derive a convergence upper bound under relaxed assumptions. Finally, we present experimental results demonstrating that DEFD-PSGD outperforms other state-of-the-art decentralized learning algorithms, particularly in scenarios involving high compression and significant data heterogeneity."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"distributed training",
"error-feedback",
"convergence analysis"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/36b290d502319f06f209fdef1047a935a4f63add.pdf"
},
"presentation": null,
"primary_area": {
"value": "optimization"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Compressed Decentralized Learning with Error-Feedback under Data Heterogeneity"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zqo2eKjSWH | Stable Signature is Unstable: Removing Image Watermark from Diffusion Models | main | Withdraw | Image Watermark;Diffusion Model;AI-generated Image | alignment, fairness, safety, privacy, and societal considerations | Yuepeng Hu;Zhengyuan Jiang;Moyang Guo;Neil Zhenqiang Gong | ~Yuepeng_Hu1;~Zhengyuan_Jiang1;~Moyang_Guo1;~Neil_Zhenqiang_Gong1 | 3;5;5;5 | 4;4;4;3 | 2;2;4;3 | 1;2;3;2 | 2;3;4;3 | 4.5 | 3.75 | 2.75 | 2 | 3 | -0.333333 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": {
"value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors."
}
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Why the e-agnostic scenario takes so much time in fine-tuning? Can the authors specify the process?\n2. I’m confused about the changing of evasion rate and the bitwise accuracy with respect to $\\mu$, the weight of the adversarial loss. It seems contradictory that as $\\mu$ increases, in terms of evasion rate and bitwise accuracy, the e-aware curve and e-agnostic curve has two different trending directions. Can the authors explain it?\n3. The assumption is that the attacker can access the decoder of an open-sourced model. I’m quite confused about this setting. If the decoder can be accessed, why not just replace the decoder with the original one? It seems the application scenario of this method is not necessary in real-life? Can the authors explain it?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The approach can adapt to two different access levels. It’s flexible and close to real-world settings.\n2. The watermark demonstrates high quality in watermark removal tasks and preserves image quality. Although E-agnostic takes long time to fine-tune, the proposed method is model-targeted and therefore needs no additional processing time once fine-tuned."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a method to remove watermarks added by stable signature [1]. The method achieves high removal effect and good image quality compared to previous method MP [2]. The method builds on the assumption that the attacker can modify the parameters of the open-sourced watermarked latent diffusion model decoder. The approach consists of two steps: first, the method estimates a denoised latent vector for each non-watermarked image in an attacking dataset; then, it fine-tunes the decoder so that the generated images align closely with the non-watermarked images. In the encoder-aware scenario, the attacker has access to the model’s encoder, diffusion process, and denoising layers. In the encore-agnostic scenario, the attacker only has access to the denoising layers and decoder. The experiment results show that this method achieves high watermark removal rates with preserved image quality.\n\n[1] Fernandez, P., Couairon, G., Jégou, H., Douze, M., & Furon, T. (2023). The stable signature: Rooting watermarks in latent diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 22466-22477)."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. It seems the novelty of the method is not very apparent. The method estimates the latent z and do fine-tuning on decoder based on the loss function, which is a common method in watermarking methods, see [1].\n2. It’s still not very clear to me why the encoded latent $\\hat{z}^i$ can be used to approximate the true $z^i$. Although the decoded image can be similar to the original image visually, the latents $\\hat{z}^i$ and $z^i$ may differ significantly. suppose the distribution of the latent after the denoising process of the diffusion model is $P$, and the distribution of the latent after the encoding of a clean image using the encoder is $P'$. $z^i$ is sampled from $P$, while $\\hat{z}^i$ is sampled from $P'$. It's obvious that $P$ and $P'$ is not the same, though they may be close to each other. The fine-tuning is actually a process that tries to adapt the input distribution of the decoder to $P'$.\n\n\n[1] Zhang, L., Liu, X., Martin, A. V., Bearfield, C. X., Brun, Y., & Guan, H. (2024). Robust Image Watermarking using Stable Diffusion. arXiv preprint arXiv:2401.04247."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Given that some existing methods can remove watermarks with simpler forward processes, how does your approach justify its more complex optimization steps in terms of practical use cases?\n\n2. The evaluation shows that the watermark removal performance varies under different conditions. How do you see your approach being adapted or scaled to scenarios where full access to the VAE or other internal components is not feasible?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1.This paper introduces a novel method of fine-tuning the decoder of a diffusion model to remove watermarks generated during the process. It demonstrates significant improvements in watermark removal while maintaining the visual quality of the output images, showing an enhancement over the Model Purification (MP) approach.\n\n2. Comprehensive Evaluation: The study includes a thorough evaluation of the proposed attack across various scenarios (both encoder-aware and encoder-agnostic), datasets, and comparisons with existing methods like MP (Model Purification)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper evaluates the robustness of the \"StableSignature\" watermarking technique embedded in the parameters of diffusion model decoders, proposed by Meta. The authors present a new model-targeted attack that effectively removes these watermarks while preserving image quality. Their results reveal that the robustness of Stable Signature is overestimated, highlighting potential vulnerabilities in in-generation watermarking techniques."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. This paper's main finding that \"Stable Signature is not robust\" is not novel; in fact, previous methods[2,3] have already made the same discovery. Existing watermark removal technologies such as DiffPure[1] and Controlgen[2] can remove Stable Signature watermarks through a simple forward pass. This contrasts with the attack method proposed in this paper, which requires multiple optimization iterations and potentially access to model components like the VAE and watermark decoder.\n\n2. The method requires fine-tuning of the model's decoder and, in some cases, access to components such as the encoder or VAE. This requirement may not always be practical for real-world adversaries and limits the broader applicability of the proposed attack. The time overhead is extremely high, making it difficult to use in practice. If I can access the decoder of a certain watermarking method, removing the watermark through optimization iterations and image regularization is an expected outcome and lacks innovation.\n\n3. This paper can only remove a single type of watermark. \n\nReference: \n1. Diffusion Models for Adversarial Purification\n2. Image Watermarks are Removable Using Controllable Regeneration from Clean Noise\n3. WAVES: Benchmarking the Robustness of Image Watermarks"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1.As stated in the weakness, the method relies on the model. The attack need to know and replace the decoder of the employed model to attack the targeted model. But it seems impossible. It seems to need two conditions. The first is that you can get the model of employed model. The second is that you can replace the decoder. Maybe it is useful when you buy a watermarked generated model from the owner and you want to escape the watermark generation. But it still need the owner give you the all model instead of an api. \nThe author may can make some explanation about the limitation.\n2.The method seems to design for fine-tuned watermarking method such as the stable signature[1] as the author states in the title. So how this method perform to other watermarking method for diffusion models such as Tree-ring[2]. So i wonder know whether this method is only suitable for watermarking method as stable signature, or\nIt can still perform well in other methods.\n3.There are some watermark removal for invisible watermark such as [3], what is the difference between the performance?\n[1]Fernandez P, Couairon G, Jégou H, et al. The stable signature: Rooting watermarks in latent diffusion models[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 22466-22477.\n[2]Wen Y, Kirchenbauer J, Geiping J, et al. Tree-ring watermarks: Fingerprints for diffusion images that are invisible and robust[J]. arXiv preprint arXiv:2305.20030, 2023.\n[3]Saberi M, Sadasivan V S, Rezaei K, et al. Robustness of ai-image detectors: Fundamental limits and practical attacks[J]. arXiv preprint arXiv:2310.00076, 2023."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "1.This paper propose a model-targeted attack to remove watermark.\n2.The method is effective at removing watermark on a large scale.\n3.This method can preserve the quality of image."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes an novel method to remove watermarks in generated images of diffusion model. Experiments show that it can remove watermarks as the paper states."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "This method is model-targeted, so the watermark removal relies on fine-tuning model. It is a white-box method."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "The authors aim to eliminate watermark information from generated images by fine-tuning the decoder. However, I have a concern regarding whether the watermark information is genuinely removed or simply hidden. Since the authors only fine-tuned the diffusion model's decoder, this fine-tuning will certainly alter the distribution of the generated images. Meanwhile, the watermark model's decoder (in the case of Stable Signature, this is HiDDeN's decoder) is not fine-tuned in tandem, which would naturally lead to a decrease in watermark extraction accuracy.\n\nIf the watermark model's decoder were also fine-tuned simultaneously, would the authors' attack still remain effective? If it were still effective, this would imply that the authors' attack merely changes the distribution rather than genuinely removing the watermark information. In such a case, defenders could train a watermark model decoder to counter this attack."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The authors clearly articulate their motivation, and the introduction and related work sections are well-structured, allowing readers unfamiliar with the field to quickly grasp the background. The problem formulation is precise, with explicit statements regarding the attacker's goals and capabilities.\n2. The experiments validate the effectiveness of the authors' E-aware and E-agnostic methods, particularly the E-aware attack on WOUAF, which achieves better FID and LPIPS scores than the \"No attack\" baseline."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a watermark removal attack targeting images generated by open-source diffusion models. The authors identify the limitations of existing in-generation watermark methods, such as Stable Signature, in the context of removal attacks. They explore two scenarios regarding the attacker's access to the encoder \\( E \\) and propose a novel two-step attack method. Experimental results demonstrate that their approach can effectively remove watermarks from diffusion model-generated images while maintaining visual quality."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The proposed method has limitations in practical applications. Both the E-aware and E-agnostic approaches require the involvement of a watermarked decoder \\( D_w \\), rendering them unsuitable for commercial closed-source watermark models.\n2. The comparative experimental setup is insufficient, especially regarding comparisons with per-image-based removal attacks. Although the authors cite several recent per-image-based attacks in Section 2.3, they only select WEvade for comparison, neglecting other diffusion-based watermark removal methods or optimization-based approaches. A more comprehensive and extensive comparison in the experimental section should be conducted."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@misc{\nhu2024stable,\ntitle={Stable Signature is Unstable: Removing Image Watermark from Diffusion Models},\nauthor={Yuepeng Hu and Zhengyuan Jiang and Moyang Guo and Neil Zhenqiang Gong},\nyear={2024},\nurl={https://openreview.net/forum?id=zqo2eKjSWH}\n}"
},
"abstract": {
"value": "Watermark has been widely deployed by industry to detect AI-generated images. A recent watermarking framework called Stable Signature (proposed by Meta) roots watermark into the parameters of a diffusion model's decoder such that its generated images are inherently watermarked. Stable Signature makes it possible to watermark images generated by open-source diffusion models and was claimed to be robust against removal attacks. In this work, we propose a new attack to remove the watermark from a diffusion model by fine-tuning it. Our results show that our attack can effectively remove the watermark from a diffusion model such that its generated images are non-watermarked, while maintaining the visual quality of the generated images. Our results highlight that Stable Signature is not as stable as previously thought."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": {
"value": [
"~Yuepeng_Hu1",
"~Zhengyuan_Jiang1",
"~Moyang_Guo1",
"~Neil_Zhenqiang_Gong1"
]
},
"authors": {
"value": [
"Yuepeng Hu",
"Zhengyuan Jiang",
"Moyang Guo",
"Neil Zhenqiang Gong"
]
},
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Image Watermark",
"Diffusion Model",
"AI-generated Image"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": {
"value": "hu|stable_signature_is_unstable_removing_image_watermark_from_diffusion_models"
},
"pdf": {
"value": "/pdf/79b993703f0157fda8e99618d489aac538898c54.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Stable Signature is Unstable: Removing Image Watermark from Diffusion Models"
},
"venue": {
"value": "ICLR 2025 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Withdrawn_Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||
zqtql1YmlS | Fewer May Be Better: Enhancing Offline Reinforcement Learning with Reduced Dataset | main | Active | Offline Reinforcement Learning; Data Selection; Grad Match | reinforcement learning | 3;5;5;6 | 4;4;3;3 | 1;2;2;3 | 1;3;2;3 | 2;3;2;3 | 4.75 | 3.5 | 2 | 2.25 | 2.5 | -0.688247 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "None"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "> **Originality**\n- Such new method is proposed to select a coreset from the raw offline dataset, which could contribute as an alternative approach in offline RL.\n\n> **Clarity**\n- Several informative figures are provided. Especially the one by t-SNE provides a straightforward way to understand the behaviour of such selection process.\n\n> **Significance**\n- In some of the settings concerned in the experiments, such method is quite efficient."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Motivated by the large size of the offline dataset as well as suboptimal data quality in offline RL, this paper considers the problem of finding a coreset out of the given dataset. The authors first formulate such problem as a task to approximate the actual gradients (from the complete dataset) in the offline training process. And a line of of results are provided to support the low approximation errors. Then the method named Reduced Datasets for Offline RL (REDOR) is proposed, inspired by the orthogonal matching pursuit (OMP). Finally, the method is compared with several baseline methods on D4RL data."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "> **Quality**\n- Several assumptions in Theorem 4.1 are rather stronger than scenarios in actual implementations. One observation often seen in offline RL is the diverging gradients (if without proper training techniques), which, however, are assumed to be uniformly bounded in the paper, w.r.t parameters in respectively policies and Q-functions.\n- Despite the multi-round selection strategy introduced in Section 4.2, as long as the empirical returns are used, as depicted in equation (13), the targets in training steps are relatively fixed (in the sense of distributions due to behaviour policies), which then makes (13) no longer an approximation of Bellman backup errors. As a result, it is currently not clear if such approach would lead to a guaranteed good estimation of values/Q-functions. \n- According to what the reviewer can understand about the statements and proof for results in Section 5, the theorems only consider the proposed method defined with classic TD loss, while do not consider the techniques emphasized in Section 4.2 - 4.3. As a result, such theoretical discussion is not an actual analysis of the proposed algorithm (feel free to correct me).\n- In Line 766, within the proof for Theorem 5.2, it is not justified why $S\\^k$ can always start from the cluster center ${c\\_k}$ of gradients.\n\n> **Clarity**\n- According to the way a Q-function is defined in Line 99, some index of $t$ should be included in the notation of $Q$.\n- Horizon $H$ is not explicitly defined.\n- There is not enough information for $L\\_{\\text{max}}$.\n- There lacks for an introduction to how KRLS, Log-Det and BlockGreedy are implemented in such offline RL settings.\n\n> **Significance**\n- As explained in the 'Quality' part, the theoretical results seem not to be exactly for the proposed method."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Could you please clarify how the suboptimal datasets for MuJoCo, namely \"hard\", were generated? The paper mentioned that they were generated by adding low-quality data, but the quality or source of such data and mix ratio should also be introduced.\n\n2. Regarding $Q_\\theta$ in Algorithm 1, could you explain how $Q_{\\theta_t}$ was formulated? There is no update term in either the pseudocode or the codebase. Was $Q_\\theta$ pretrained or trained simultaneously but omitted? It would be best if the pseudocode or thorough explanations were provided.\n\n3. In Equation 14, it is stated that trajectories in the top $m%$ based on return are filtered, with $m$ set to 50, which would seem to exclude almost the entire random dataset. Could you provide the result of simply selecting trajectories with top $m (=50)\\%$ returns for comparison?\n\n4. In the codebase, it seems that in addition to the evaluation of Monte Carlo Q targets, the selection of candidate trajectories via OMP is filtered based on trajectory returns. What is the exact search space of the selected trajectories? If it is the filtered one with trajectory returns, then how can we ensure the fairness of the comparison to baselines that do not utilize such a filter?\n\n5. In the paper, the percentile $m$ is specified as 50 (Top), but in the codebase, it varies (Bottom 50, 70, and 95). Could you clarify the reason for this difference?\n\n6. In Algorithm 2, $r$ is defined as a scalar, but in Line 4, an inner product is applied. Could you kindly explain this?\n\n7. In Line 3 of Algorithm 2, the inequality appears to be reversed. Is this correct?\n\n8. Is there a reason why TD3+BC was chosen as the backbone offline RL algorithm for the MuJoCo tasks? Would using IQL, as in the Antmaze tasks, provide a more consistent comparison?\n\n9. For the MuJoCo tasks, the authors used the \"-v0\" versions, which are now outdated and differ from the more recent \"-v2\" versions. 
Could you explain the reasoning behind using \"-v0\"?\n\n10. For the \"Complete Dataset\" scores in the Antmaze tasks, it seems that these values are taken from the IQL paper, which does not provide standard deviations. Could you clarify how these scores were derived?\n\n11. While the baselines used in the experiments appear somewhat dated, dataset selection has recently gained increased attention in offline RL. Hence, it seems that recent algorithms should be contained as baselines. For example, \"Improving Generalization in Offline Reinforcement Learning via Adversarial Data Splitting (Wang et al., 2024)\" provides a codebase, which could allow for a straightforward comparison. Or, is there any reason why such comparisons are inappropriate?\n\n12. Could you provide more details on what is meant by the \"Complete Dataset\" baseline? Specifically, is it the original mixture of the desired dataset and the suboptimal dataset, or is it just the original dataset?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "1. The method demonstrates improved performance compared to the baselines.\n2. The paper includes a theoretical analysis that provides a solid grounding for the approach."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a method for dataset selection in offline reinforcement learning (RL) using the Orthogonal Matching Pursuit (OMP) algorithm and Monte Carlo Q losses. The proposed approach selects full trajectories whose loss gradients align well with the residuals."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Some key elements of the proposed algorithm are either missing or unclear, and there are some discrepancies between the paper and the accompanying codebase. For instance, the method used to generate \"hard\" datasets is not fully discussed in the paper, and the percentile $m$ mentioned in the paper differs from that in the codebase. More details are provided in the questions below.\n\n2. Certain parts of the proposed algorithm may contain logical errors or inconsistencies. For example, in Line 4 of Algorithm 2, $r$ is a scalar, yet an inner product operation is applied to it. More details are provided in the questions below.\n\n3. The baselines chosen for comparison seem somewhat outdated, which could affect the perceived significance of the performance improvements demonstrated by the proposed method."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Q1. How is the weight $w_i$ or $\\lambda$ decided during training and the parameters $L_{max}$, $m$,$\\epsilon$ chosen in practice?\n\nQ2. Are the networks Qθ, πϕ networks first trained on the full dataset before starting with the subset selection?\n\nQ3. What is the empirical reduction percentage achieved in each dataset?\n\nQ4. In Figure 1 for the walker2d-expert-v0 environment, the reward first increases and then drops. It is also counterintuitive that the subset selected in ReDOR would perform better than a dataset containing only expert trajectories. Could the authors provide an explanation for this behavior?\n\nQ5. Q5. Could the authors elaborate more on the Prioritize baseline, what do samples with highest TD Loss mean?\n\nQ6. How does ReDOR perform on random datasets such as halfcheetah-random-v2?\n\nQ7. I could not understand Fig 3. Why are the reduced dataset points more for category 6 when it is a subset of complete dataset?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is well written and the idea is easy to follow.\n2. The idea of subset selection is novel and interesting.\n3. The paper provides both strong theoretical study and empirical analysis of the proposed method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper explores the interesting concept of finding a subset of the offline dataset to improve the performance of offline RL algorithms using orthogonal matching pursuit. The authors provide empirical and theoretical evidence of performance improvement on benchmark datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The authors characterize the field of offline RL only in terms of OOD action penalization and constraints on the behavior policy. There should also be a short discussion on model-based methods like MOPO [1] and MoERL [2], as some of these approaches have been shown to outperform model-free methods.\n\n2. Some parts of the paper are difficult to understand without prior knowledge of orthogonal matching pursuit. Specifically, how is $F\\lambda(s) = L_{max} - min_w Err_{\\lambda} (w, S, L, \\theta)$ used in the OMP.\n\n3. If I understand correctly this method may not lead to the claimed reduction in complexity, as training $Q_{\\theta}$ and $\\pi_{\\phi}$ till requires the full dataset.\n\nMinor \n\nThe table references do not match the table numbers. On line 420, I believe the authors are referring to Table 1 instead of 6.2.\n\nSuggestion : If the authors could include a notations table in Appendix it will help in readability and understanding the proofs.\n\nReferences:\n[1] Kidambi, Rahul, et al. \"Morel: Model-based offline reinforcement learning.\" Advances in neural information processing systems 33 (2020): 21810-21823.\n[2] Yu, Tianhe, et al. \"Mopo: Model-based offline policy optimization.\" Advances in Neural Information Processing Systems 33 (2020): 14129-14142."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Additional experiments:\n- Does simply filtering the dataset by high returns recover the same performance?\n- What is the performance of ReDOR on the original version of D4RL? One might expect that reducing mixed quality datasets like medium-expert, or medium, could also result in a high performance. \n\nMissing experimental details:\n- How is the hard dataset generated? How many datapoints are added to the dataset?\n- How many datapoints are removed by ReDOR? What is the size of the reduced datasets?\n\nGeneral:\n- Is there a way to tune the resulting dataset size? \n- Is Fig 3, episode return = 99.5 for behaviors [2-7] correct or a bug?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- This is an interesting and novel approach for data selection in RL. The high-level approach/formulation of the problem may be useful as a foundation for extensions. \n- Strong results on a modified version of D4RL, and the unmodified antmaze."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors introduce an approach for reducing the size of a dataset for offline RL by defining this reduction as a submodular set cover problem and using orthogonal matching pursuit. The resulting algorithm is evaluated on a modified version of D4RL locomotion tasks and the original antmaze tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "There is a discrepancy between the proposed objectives and the resulting objectives that makes me question where the effectiveness of the proposed approach comes from. \n\nThe problem is initially defined as finding a subset of the data which results in a higher performing policy than the policy determined by training on the original dataset (Eqn 3). However, this is immediately discarded for another optimization problem, which instead tries to limit change in the value function (Eqn 5). While discovering a smaller dataset which achieves the same performance as the original dataset is an interesting problem, the authors claim in several places (and demonstrate) that their reduced dataset actually improves the performance. So where does the performance gain come from? \n\nOne possible cause for the performance increase is how the evaluation is done (add noisy/low performing trajectories to the D4RL dataset) and the filtering of low performing trajectories (Eqn 14). I would be very curious if this filtering alone is sufficient to also recover the performance of the algorithm. This concern, along with some missing key experimental details, makes me cautious about the experimental claims made in the paper. \n\nMissing References which also filter the dataset using returns:\n- [1] Chen, Xinyue, et al. \"Bail: Best-action imitation learning for batch deep reinforcement learning.\" Advances in Neural Information Processing Systems 33 (2020): 18353-18363.\n- [2] Yue, Yang, et al. \"Boosting offline reinforcement learning via data rebalancing.\" arXiv preprint arXiv:2210.09241 (2022)."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Contruct the reduced dataset to improve algorithm performance while accelerating algorithm training."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024fewer,\ntitle={Fewer May Be Better: Enhancing Offline Reinforcement Learning with Reduced Dataset},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zqtql1YmlS},\nnote={under review}\n}"
},
"abstract": {
"value": "Research in offline reinforcement learning (RL) marks a paradigm shift in RL. However, a critical yet under-investigated aspect of offline RL is determining the subset of the offline dataset, which is used to improve algorithm performance while accelerating algorithm training. Moreover, the size of reduced datasets can uncover the requisite offline data volume essential for addressing analogous challenges. Based on the above considerations, we propose identifying Reduced Datasets for Offline RL (ReDOR) by formulating it as a gradient approximation optimization problem. We prove that the common actor-critic framework in reinforcement learning can be transformed into a submodular objective. This insight enables us to construct a subset by adopting the orthogonal matching pursuit (OMP). Specifically, we have made several critical modifications to OMP to enable successful adaptation with Offline RL algorithms. The experimental results indicate that the data subsets constructed by the ReDOR can significantly improve algorithm performance with low computational complexity."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Offline Reinforcement Learning; Data Selection; Grad Match"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/83a26b58235708febe7b478ab44b34e1aa977577.pdf"
},
"presentation": null,
"primary_area": {
"value": "reinforcement learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/08ab7a91d5a4b09978def2b9a2702874904b288e.zip"
},
"title": {
"value": "Fewer May Be Better: Enhancing Offline Reinforcement Learning with Reduced Dataset"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zqzsZ5cXbB | Let the Code LLM Edit Itself When You Edit the Code | main | Active | code generation;efficiency;large language model;code assistant | applications to computer vision, audio, language, and other modalities | 1;5;6;6 | 4;2;4;3 | 1;2;4;3 | 1;2;4;3 | 3;3;3;3 | 4.5 | 3.25 | 2.5 | 2.5 | 3 | -0.365636 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Could you provide performance results for a baseline that reuses the pre-edit context without modification for making suggestions? This zero-cost approach would be helpful to compare against Full-recomputation in your benchmark."
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "The paper effectively outlines the real-time editing problem and clearly describes the mathematical foundation for PIE based on rotary positional encoding."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a technique to update the rotary positional encoding when a small part of the context tokens are updated. It aims to optimize computational efficiency in real-time editing scenarios. The authors show that by fixing the positional encoding alone, they were able to retain a performance that is almost as good as fully re-encoding the context on a left-to-right code generation task, but with considerably less computational cost."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**Limited Technical Novelty**: The mathematical derivation is relatively straightforward, stemming directly from rotary positional encoding's relative nature, without additional innovation or complexity.\n\n**Unrealistic Setting for Interactive Editing**:\n * Random Edits Only: The experimental setup evaluates PIE on random edits, which does not align with realistic real-time editing workflows, where temporally or contextually related edits are more common (e.g., editing a function signature and then updating its call sites). PIE’s simplifications may therefore be less applicable to typical usage patterns.\n * Single-Edit Evaluation: The paper evaluates PIE in a single-edit scenario, overlooking the potential for accumulated errors in multi-edit settings. In practical applications, users often make multiple edits, which could introduce drift in the positional encoding without full recomputation.\n * Left-to-Right Only: Evaluations are limited to left-to-right generation, omitting fill-in-the-middle (FIM) generation, a task relevant in code editing where users may modify code segments in the middle of sequences. Without this, it is unclear how PIE would perform in varied editing tasks.\n\n**Unconvincing Conclusions Due to Limited Evaluation Scope**: Given the unrealistic evaluation settings, the claim that PIE can retain performance by adjusting positional encoding alone is unconvincing. By testing only on random edits, the experiments fail to address cases where contextual dependencies (e.g., edits that affect other tokens' relevance) might demand full recomputation. This risks overstating PIE’s applicability to real-world editing scenarios. 
To build a more compelling case, I recommend:\n * Evaluating PIE on real user edit sequences rather than synthetic random edits.\n * Restricting comparisons with full recomputation to cases where edited tokens impact final target tokens meaningfully (e.g., by verifying that removing or masking these edited tokens affects the target token prediction likelihood).\n * Including a special baseline where the pre-edit context is reused without modification, establishing a zero-cost apporach for comparison.\n\n**Lack of Multi-Edit and Accumulated Error Analysis**: With each edit, additional errors will enter the encoding under the proposed technique, but the paper provides no analysis of error accumulation across multiple edits. Without such discussion, it’s unclear when a full recomputation might be needed to reset the encoding.\n\n**Lack of Fill-in-the-Middle Evaluation**: Evaluations are limited to left-to-right generation, omitting fill-in-the-middle (FIM) generation, which is more relevant to the interactive coding assistant scenarios mentioned by the paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Can you please add evaluations for 1 more dataset and 1 more model?\n2. How does the approach do for other non-code related tasks where semantic relationship is important?\n3. How does the approach do for longer edits?\n4. Is there any memory overhead of the proposed approach?\n5. Does this approach lead to any cumulative errors if you continue to update KV cache based on PIE?\n6. The average scores of 3 experiments are reported, could you also report the standard deviation?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is well-written and easy to understand.\n2. The authors solve an important task of efficiency in updating KV cache in a real-time code editing setting. This is crucial for interactive coding assistant scenario where the developers make frequent and incremental changes to the exisiting code and require copilot to correctly predict the next line on the fly.\n3. The authors perform experiments on 1 dataset for 3 tasks and show 85% reduction in computational overhead compared to brute-force approach."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents Positional Integrity Encoding (PIE) as an inexpensive alternative to re-encode the entire KV cache during LLM decoding specifically for code related tasks. PIE solves the temporal confusion task efficiently using a single round of matrix-multiplicaton. The authors provide results on RepoBench dataset using 3 different sizes of DeepSeek-Coder model for 3 tasks- code insertion, deletion and multi-place editing. The results show that PIE reduces computational overhead by 85% compared to full-recomputation without significant loss of performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The results are limited to 1 dataset and 1 model. Including more than 1 dataset and model would make the claim more strong.\n2. The authors solve an important task of efficiency of real-time code editing but do not discuss the limitations of this approach for other tasks where semantic impact is large or in case of large code edits.\n3.The approach has a dependency on RoPE and might not be suitable for other models without RoPE"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Can the PIE be effective on code generation tasks?\n\n2. Why not consider the Pass@1 metric?\n\n3. Is PIE equally valid on other models (Qwen-2.5-Coder; Llama-3.1; Yi-Coder)?\n\nI am more concerned about the third of the above issues. If the supplemental results do show the validity of PIE, I will raise my score."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "I really enjoy this paper!\n\nThe Positional Integrity Encoding (PIE) introduced by the authors capitalizes on RoPE, adeptly addressing temporal disorientation by initially stripping away the rotary matrices responsible for confusion and subsequently reinstating the appropriate matrices through straightforward matrix multiplication.\n\nThis capability to enhance computational efficiency without compromising accuracy is precisely the straightforward yet potent approach we value in the realm of language model optimization.\n\nThe PIE not only paves the way for future research in optimizing Large Language Models (LLMs), particularly focusing on efficiency, but also excels in real-time dynamic scenarios. Its compatibility with existing acceleration techniques positions PIE as a catalyst for further advancing the practical deployment of LLMs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper describes a new approach called Positional Integrity Encoding (PIE) that aims to improve the efficiency of Large Language Models in real-time code editing scenarios.\n\nPIE improves the accuracy and efficiency of predictions by solving the problem of the computational overhead associated with recoding the entire context when editing existing code."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "My current concerns are regarding the selection of downstream tasks and evaluation metrics considered by the authors.\n\n(1) The tasks of code insertion, code deletion, and multi-place code editing that the authors have considered seem less critical and common in actual development scenarios compared to code generation.\n\n(2) The chosen evaluation metrics, EM (Exact Match) and ES (Edit Similarity), may not accurately assess the semantic correctness of the generated code.\n\n(3) The selection of models is limited to the DeepSeek-Coder series."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Have you conducted more experiments on more settings and models?\n- Is it possible to provide an in-depth *theoretical* analysis (in 4.3) of the cause of performance drop between PIE and the recomputation?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The proposed algorithm significantly reduces the latency of generation with edited prefixes.\n- A straightforward solution to adjust the rotary matrices to correct temporal confusion."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes Positional Integrity Encoding, an algorithm to correct rotary positional encoding in the KV cache of displaced tokens due to edits. It shows significant speed-up compared to full recomputation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The algorithm looks somewhat trivial to me. It simply corrects the position embedding by transforming the rotary matrix in RoPE.\n- Limited experiments. All performance experiments were conducted on RepoBench-C-8k and DeepseekCoder. It is unclear if this algorithm generalizes to other settings and models."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024let,\ntitle={Let the Code {LLM} Edit Itself When You Edit the Code},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zqzsZ5cXbB},\nnote={under review}\n}"
},
"abstract": {
"value": "In this work, we investigate a typical scenario in code generation where a developer edits existing code in real time and requests a code assistant, e.g., a large language model, to re-predict the next token or next line on the fly. Naively, the LLM needs to re-encode the entire KV cache to provide an accurate prediction. However, this process is computationally expensive, especially when the sequence length is long. Simply encoding the edited subsequence and integrating it to the original KV cache meets the temporal confusion problem, leading to significantly worse performance. We address this efficiency and accuracy trade-off by introducing $\\underline{\\textbf{P}\\text{ositional}\\ \\textbf{I}\\text{ntegrity}\\ \\textbf{E}\\text{ncoding}}$ (PIE). Building upon the rotary positional encoding, PIE first removes the rotary matrices in the Key cache that introduce temporal confusion and then reapplies the correct rotary matrices. This process ensures that positional relationships between tokens are correct and requires only a single round of matrix multiplication. We validate the effectiveness of PIE through extensive experiments on the RepoBench-C-8k dataset, utilizing DeepSeek-Coder models with 1.3B, 6.7B, and 33B parameters. Our evaluation includes three real-world coding tasks: code insertion, code deletion, and multi-place code editing. Results demonstrate that PIE reduces computational overhead by over 85% compared to the standard full recomputation approach across all model sizes and tasks while well approximating the model performance."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"code generation",
"efficiency",
"large language model",
"code assistant"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/fbda7447b1e418f10c505a47c16e3c9bba170f50.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Let the Code LLM Edit Itself When You Edit the Code"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zrNbsV87Os | SSIF: Physics-Inspired Implicit Representations for Spatial-Spectral Image Super-Resolution | main | Active | Neural Implicit Function;Spatial-Spectral Super Resolution;Spectral Encoding | applications to computer vision, audio, language, and other modalities | 5;5;5;5 | 5;5;3;5 | 2;2;3;3 | 2;3;3;3 | 2;2;3;3 | 5 | 4.5 | 2.5 | 2.75 | 2.5 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1.\tUsing the CAVE dataset as an example, CAVE contains 31 spectral bands, meaning it lacks ground truth (GT) data for cases where C > 31. Therefore, how was the PSNR for C > 31 in Figure 3 calculated? If the GT is obtained through interpolation, wouldn't the model's performance be limited to only approximating the performance of the interpolation?\n2.\tThe paper explains how \\(C\\) is selected, but how is the channel dimension \\(c\\) of the HR-MSI (with a shape of \\(H \\times W \\times c\\)) determined? In Figure 3, the spectral bands from 1 to 7 are considered out-of-distribution. Is this because \\(c\\) is chosen from the range of 8 to 31? Was the downsampling performed by simply truncating consecutive spectral bands, or was it uniformly sampled across the range? The lack of clarity in this explanation makes it unclear why \\(C_{\\min} = 8\\) was chosen instead of 1.\n3.\tFor the out-of-distribution spectra, I would prefer to see a visual representation of the results, but this was not included in the paper.\n4.\tSince \\(\\Lambda\\) can be specified, is it possible to achieve spectral compression, such as converting an HSI image into an RGB image?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "This paper introduces a neural implicit model that expresses an image as a function of continuous pixel coordinates in the spatial domain and continuous wavelengths in the spectral domain, achieving simultaneous super-resolution in both dimensions—an approach that has not been explored before. The study demonstrates substantial innovation, is well-supported by theory, and includes extensive comparative and ablation experiments."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a neural implicit model that expresses an image as a function of continuous pixel coordinates in the spatial domain and continuous wavelengths in the spectral domain, achieving simultaneous super-resolution in both dimensions—an approach that has not been explored before. The study demonstrates substantial innovation, is well-supported by theory, and includes extensive comparative and ablation experiments. In summary, this paper presents impressive innovations and demonstrates substantial effort. However, *Issue 1* raise some doubts about the reliability of the experimental results. Moreover, since the performance in spectral super-resolution is more appealing, the related experiments need improvement (such as Issue 3). If the concerns I raised are addressed, I would consider raising the score."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tUsing the CAVE dataset as an example, CAVE contains 31 spectral bands, meaning it lacks ground truth (GT) data for cases where C > 31. Therefore, how was the PSNR for C > 31 in Figure 3 calculated? If the GT is obtained through interpolation, wouldn't the model's performance be limited to only approximating the performance of the interpolation?\n2.\tThe paper explains how \\(C\\) is selected, but how is the channel dimension \\(c\\) of the HR-MSI (with a shape of \\(H \\times W \\times c\\)) determined? In Figure 3, the spectral bands from 1 to 7 are considered out-of-distribution. Is this because \\(c\\) is chosen from the range of 8 to 31? Was the downsampling performed by simply truncating consecutive spectral bands, or was it uniformly sampled across the range? The lack of clarity in this explanation makes it unclear why \\(C_{\\min} = 8\\) was chosen instead of 1.\n3.\tFor the out-of-distribution spectra, I would prefer to see a visual representation of the results, but this was not included in the paper.\n4.\tSince \\(\\Lambda\\) can be specified, is it possible to achieve spectral compression, such as converting an HSI image into an RGB image?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1.Is the comparison algorithm from the most recent year?\n2.Channel analysis should be conducted under the guidance of the ground truth (GT); in other words, I would like to see the channels reshaped from C<31 to 31"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.The figures used to analyze the algorithm are very striking, as shown in Figure 1.\n2.Diverse methods were used to analyze the performance of each band in the images."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work proposed Spatial-Spectral Implicit Function (SSIF), which generalizes neural implicit representations to the spectral domain as a physics-inspired architecture by incorporating sensors’ physical principles of spectral imaging.This method is quite appealing, but there are still many issues that need to be resolved."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.Table 1 lacks a consistent number of decimal places; for example, 9.3, 13.2, and 27.3 have one decimal place, while the remaining values have two.\n2. The latest experiment only includes one comparison algorithm for 2023; a comparison algorithm for 2024 should be added.\n3.The authors conducted experiments on arbitrary channels with 𝐶>31 in CAVE, but the total number of channels in CAVE is only 31. My suggestion is to see if it is possible to recover the channel count from 𝐶<31 back to 31, and then perform comparative experiments.\n4.Figure 19 could benefit from the addition of an error plot to better illustrate the comparison results"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. The physical principles of spectral imaging in SSIF’s model design lack a deep analysis of their physical properties. It is unknown whether the learned neural network reflects physical properties.\n2. The authors should provide some relevant implicit function-based methods for comparison.\n3. The paper seems like a combination of INR, SwinIR, and ciaoSR in HSI spatial-spectral tasks. Could you elaborate on any challenges faced when designing SSIF? This clarification would help in understanding the complexity and novelty of your methodology. Actually, as mentioned in the weaknesses, the explanation of certain blocks in this paper lacks soundness."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. This article presents a relatively novel approach to HSI spatial-spectral super-resolution, attempting to achieve it from the perspective of continuous physical space.\n2. SSIF demonstrates good efficiency under low data conditions and converges more quickly during the training process."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes the Spatial-Spectral Implicit Function (SSIF), a physics-inspired neural implicit model that represents an image as a continuous function of both pixel coordinates in the spatial domain and wavelengths in the spectral domain. The authors validate SSIF on two challenging benchmarks, demonstrating its superior performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The description of the methods section in this paper is insufficient.\n2. This article's main innovation lies in combining the physical principles of spectral imaging with the spectral response functions of sensors to achieve HSI spatial-spectral super-resolution. Unfortunately, the authors do not provide sufficient explanation and analysis in the paper.\n3. The experimental section lacks the latest relevant methods."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. How are the starting and ending points, $\\lambda_{i,s}$ and $\\lambda_{i,e}$, determined for each wavelength interval?\n2. How are the points within the wavelength interval selected? Does the choice between random and non-random selection significantly affect the results?\n3. To investigate how the selection of points affects the results, would it be possible to adopt a coarse-to-fine approach, similar to NeRF, to re-sample the points?\n4. Why is the variance of the Gaussian distribution expressed in the form $\\sigma_i = \\frac{\\lambda_{i,e} - \\lambda_{i,s}}{6}$?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The motivation of this study is articulated clearly, beginning with the physical imaging process of hyperspectral sensors to design spectral super-resolution. Experiments confirm the effectiveness of this design. Spatial super-resolution has already been widely adopted in previous work, and this paper combines both approaches to achieve dual-domain super-resolution."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper utilizes an implicit representation framework to address both spatial and spectral degradation in hyperspectral restoration tasks. Various structural variants are proposed based on different sampling strategies, achieving promising results in both in-distribution and out-of-distribution scenarios."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. This paper has undergone multiple revisions over an extended period, but it still requires the addition of experiments from 2024.\n2. Implicit representations have become quite advanced in the task of spatial super-resolution, and the approach presented in this paper is relatively straightforward. However, this work lacks a certain level of innovation in the spatial dimension of super-resolution.\n3. This paper still contains typographical errors, such as in section A.5 'SSIF Model Variants,' where 'Both' on line 978 should be corrected to 'both.' Additionally, while the paper presents numerous experiments, some, such as those in section A.14, lack significance. For instance, the careful selection of only two points does not effectively demonstrate the model's superiority."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Spatial-Spectral Implicit Function"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024ssif,\ntitle={{SSIF}: Physics-Inspired Implicit Representations for Spatial-Spectral Image Super-Resolution},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zrNbsV87Os},\nnote={under review}\n}"
},
"abstract": {
"value": "Existing digital sensors capture images at fixed spatial and spectral resolutions (e.g., RGB, multispectral, and hyperspectral images), and generating super-resolution images with different resolution settings requires bespoke machine learning models. Spatial Implicit Functions (SIFs) partially overcome the spatial resolution challenge by representing an image in a spatial-resolution-independent way. However, they\nstill operate at fixed, pre-defined spectral resolutions. To address this challenge, we propose Spatial-Spectral Implicit Function (SSIF), a neural implicit model that represents an image as a function of both continuous pixel coordinates in the spatial domain and continuous wavelengths in the spectral domain. This continuous representation across spatial and spectral domains enables a single model to learn from a diverse set of resolution settings, which leads to better generalizability. This representation also allows the physical principle of spectral imaging and the spectral response functions of sensors to be easily incorporated during training and inference. Moreover, SSIF does not have the equal spectral wavelength interval requirement for both input and output images which leads to much better applicability. We empirically demonstrate the effectiveness of SSIF on two challenging spatial-spectral super-resolution benchmarks. We observe that SSIF consistently outperforms state-of-the-art baselines even when the baselines are allowed to train separate models at each spatial or spectral resolution. We show that SSIF generalizes well to both unseen spatial and spectral resolutions. Moreover, due to its physics-inspired design, SSIF performs significantly better at low data regime and converges faster during training compared with other strong neural implicit function-based baselines."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Neural Implicit Function",
"Spatial-Spectral Super Resolution",
"Spectral Encoding"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/b93b185985f72c3cd14577051ca000a0d5611f28.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/4247aaef39cae4947c08ca74fdbcccf622dffc3f.zip"
},
"title": {
"value": "SSIF: Physics-Inspired Implicit Representations for Spatial-Spectral Image Super-Resolution"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zrdkQaf48Z | Leveraging Implicit Sentiments: Enhancing Reliability and Validity in Psychological Trait Evaluation of LLMs | main | Active | LLM;Benchmark;Evaluation;Psychometrics | alignment, fairness, safety, privacy, and societal considerations | 3;3;5;5 | 4;3;4;3 | 2;2;3;3 | 2;2;2;2 | 2;4;2;3 | 4 | 3.5 | 2.5 | 2 | 2.75 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "See numbered questions above: 1, 2, 3, 4"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Reasons to accept\n- The presentation of the method is clear and concise.\n- Measuring LLM’s sentiment tendencies is vital to identify biases and building fair systems.\n- The evaluation is performed both in English and Chinese with the approach being easily extended to other languages."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces the Core Sentiment Inventory (CSI), a multilingual evaluation benchmark aimed at assessing the sentiment tendencies of LLMs in an implicit manner. The approach leverages 5,000 neutral words from English and Chinese and prompts the LLM to express polarity towards these neutral words. By assessing this polarity, the paper measures the biases of the models with respect to optimism or pessimism. To quantify the reliability of CSI, the paper validates the method against BFI, where CSI shows a significant decrease in reluctance (i.e., model punting)."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Reasons to reject\n\nWhile the paper has merit, I see some critical flaws presented below:\n\n- The decision to pick all top nouns/verbs is questionable to me. Yes, nouns and verbs “tend” to be neutral. However, this is not always the case. From the examples in Table 2, some of these words are clearly polarized. “Improve” has positive connotations, as well as “team”. I believe there needs to be a manual filtering step where these words are removed to ensure reliable results. As it stands, I think the model does not have implicit biases if it assigns “improve” as positive.\n- Design choices are not well-motivated. The approach does multiple predictions for the same word and the method shuffles the order of words to measure inconsistency. (1) Given this goal, why is temperature T set to 0? Wouldn’t a higher temperature better indicate the model uncertainty in assigning tragedy/comedy? (2) Why is the number of words sampled equal to 30? What happens with n > 30 or n < 30? Why wasn’t the number of words picked so it maximizes the context window?\n- The prompt design is biased. (3) Why is neutral not a valid option? A very strong model with perfect instruction following capabilities will always pick one of the two (comedy/tragedy) and will never output “neutral”. (4) Given the definition of neutral score as N_{inconsistent} / N, I am wondering what percentage of words the model predicted in opposite categories? I think they should be very few. In this case, neutral score is solely determined by poor instruction following, not implicit biases."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "None"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "• Discuss how CSI can improve existing psychological scales designed for humans.\n\n• Further explore the unintentional biases that language models may have towards everyday concepts, as mentioned in section 4.1.\n\n• On line 424, it might be: “e.g., five positive words, four neutral words and one negative word, and so on.”"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "• The CSI can effectively quantify the emotions of models, reflecting the emotional tendencies of LLMs.\n\n• It effectively reduces the issues of reluctance and inconsistency in traditional psychological scales.\n\n• The use of representative neutral words in constructing the CSI reduces the potential emotional orientation of the words themselves, better reflecting the internal emotional associations of LLMs.\n\n• It explores the impact of different language environments on the emotional tendencies of LLMs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a novel assessment method known as the Core Sentiment Inventory (CSI) for evaluating the emotional tendencies of large language models (LLMs). Inspired by the Implicit Association Test (IAT), the CSI aims to provide a more reliable and effective way to assess the implicit emotional characteristics of LLMs. It addresses the limitations of traditional psychometric methods, such as model reluctance and inconsistency."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "• The constructed CSI is only used to assess the emotions of LLMs and has not been extended to other psychometric fields, such as personality or moral decision-making.\n\n• The paper does not describe how the consistency rate and reluctance rate are calculated."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- did you consider using alternatives to \"comedy\" / \"tragedy\" (e.g. \"fun\", \"good\", \"dramatic\", \"bad\")?\n\n- did you consider LLMs with no significant guardrails (such as, if I remember correctly, the first Mistral)?\n\n- all models used in the paper are multilingual, have you considered prompting in cross-lingual setups (e.g. chinese prompt for english CSI) for additional reliability indications?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper proposes an elegant approach based on the adaptation of existing psychometric tools (IAT).\n\nThe paper is well written and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose the Core Sentiment Inventory, a psychometric evaluation framework to assess LLMs' sentiment tendencies.\n\nThe experimental setup covers two languages, English and Chinese, 2 open-weight LLMs (LLama3.1-70B and Qwen2-72B), and 4 closed/proprietary LLMs (GPT-4o, two GPT4 checkpoints, and GPT-3.5).\n\nThe CSI consists of the 5k most frequent emotionally neutral words; it is assumed that nouns and verbs are neutral, and thus the CSI word lists consist of the top-5k nouns/verbs in the corpora.\n\nThe LLM is then provided with N words picked from the wordlist, and asked to associate each word with \"comedy\" or \"tragedy\", thus revealing a sentiment bias for each word."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Although an enjoyable read, the work falls short when it comes to the experimental setup.\n\nFirst, while I consider the proposed method more elegant and effective than human-tailored alternatives such as BFI, I am not convinced by the preliminary reliability tests conducted: it can be argued that reluctance is due to post-training strategies (e.g. guardrails, instruction-tuning), thus a different choice of (accessible) LLMs could have been more convincing -- e.g. the first Mistral release.\n\nSecond, some design choices seem discretional and not thoroughly justified: for instance, the choice of the words \"comedy\" / \"tragedy\" used as classes seems arbitrary."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"Yes, Privacy, security and safety"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See above"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "CSI represents an interesting attempt to create a psychometric assessment tool tailored for LLMs, addressing concerns around LLM reluctance and consistency with human-designed psychometric scales.\n\nThe bilingual approach (English and Chinese) is a notable effort to capture linguistic and cultural variance in model behaviors, which is increasingly important as LLMs are deployed globally.\n\nThe experiments cover several dimensions of reliability and validity, with additional sentiment analyses through story generation, providing a range of quantitative metrics."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces the Core Sentiment Inventory (CSI), a new evaluation method inspired by the Implicit Association Test (IAT) to assess the implicit sentiment tendencies of large language models (LLMs). The approach aims to provide a reliable and valid measure of LLMs' optimism, pessimism, and neutrality in both English and Chinese, surpassing conventional human-centric psychometric tests like the Big Five Inventory (BFI). The authors present experimental results that claim improved reliability, reduced reluctance rates, and strong predictive power for CSI."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper does not sufficiently justify the underlying premise that implicit sentiment tests designed for humans (like IAT) can meaningfully assess non-human entities like LLMs. The model’s association of specific words with positive or negative sentiments may not translate into meaningful or actionable insights about its “psychological traits,” as LLMs lack actual consciousness or subjective experience.\n\nCSI is evaluated solely through internal metrics without external validation from human experts in psychometrics or linguistics. Given the novelty of the tool, expert evaluation is essential to substantiate the claims of reliability and practical value, particularly for a method positioned as a \"psychological trait\" assessment.\n\nThe word set of 5,000 items lacks diversity and cultural depth, and it is unclear how these words were chosen or if they were screened for cultural or contextual biases. This oversight introduces potential biases that could skew CSI’s predictive power and undermine its reliability across varied contexts.\n\nMany new mental health-based LLMs are not cited to show the differences and effectiveness of this paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose the Core Sentiment Inventory (CSI), a novel tool inspired by the Implicit Association Test, to evaluate sentiment tendencies in Large Language Models"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024leveraging,\ntitle={Leveraging Implicit Sentiments: Enhancing Reliability and Validity in Psychological Trait Evaluation of {LLM}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zrdkQaf48Z},\nnote={under review}\n}"
},
"abstract": {
"value": "Recent advancements in Large Language Models (LLMs) have led to their increasing integration into human life. Understanding their inherent characteristics, such as personalities, temperaments, and emotions, is essential for responsible AI development. However, current psychometric evaluations of LLMs, often derived from human psychological assessments, encounter significant limitations in terms of reliability and validity. Test results reveal that models frequently refuse to provide anthropomorphic responses and exhibit inconsistent scores across various scenarios. Moreover, human-derived theories may not accurately predict model behavior in practical real-world applications.\nTo address these limitations, we propose Core Sentiment Inventory (CSI), a novel evaluation instrument inspired by the Implicit Association Test (IAT). CSI is built from the ground up with a significantly broader range of stimuli words than traditional assessments. CSI covers both English and Chinese to implicitly evaluate models’ sentiment tendencies, which allows for a much more comprehensive assessment.\nThrough extensive experiments, we demonstrate that CSI effectively quantifies models’ sentiments, revealing nuanced emotional patterns that vary significantly across languages and contexts. CSI significantly improves reliability, yielding more consistent results and a reduced reluctance rate, and enhances predictive power by effectively capturing models’ emotional tendencies. These findings validate CSI as a robust and insightful tool for evaluating the psychological traits of LLMs, offering a more reliable alternative to traditional methods."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"LLM",
"Benchmark",
"Evaluation",
"Psychometrics"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/cf2b56d8d594adc005041eecf1f0a94700e18b08.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/1b895ee3a170726823eefef05edeeb8257647eb8.zip"
},
"title": {
"value": "Leveraging Implicit Sentiments: Enhancing Reliability and Validity in Psychological Trait Evaluation of LLMs"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zs6bRl05g8 | Accelerating Block Coordinate Descent for LLM Finetuning via Landscape Correction | main | Active | Block coordinate descent;large language model finetuning | optimization | 3;3;5;6 | 3;4;3;3 | 2;3;2;3 | 2;2;2;3 | 1;3;3;3 | 4.25 | 3.25 | 2.5 | 2.25 | 2.5 | -0.555556 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to the weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. It is the first study to identify the issue in BCD, where frozen blocks that are far from optimality can interfere with the optimization of the active block.\n2. It theoretically demonstrates, through a regression problem on a three-layer shallow network, that suboptimal solutions may indeed arise.\n3. For LLM fine-tuning tasks, it shows improvements over existing baselines in both memory efficiency and performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Recently, a method called BAdam, which utilizes block coordinate descent (BCD), has gained attention in LLM fine-tuning for its memory efficiency.\nThis study reveals a drawback of BCD optimization in neural network training: the frozen blocks can narrow the optimization landscape, potentially misleading the training of the active block and resulting in suboptimal solutions.\nBased on this intuition, the authors propose the BREAD method, which corrects the loss landscape of BCD. The authors also experimentally demonstrate the superiority of BREAD over the existing baselines in terms of both memory efficiency and performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I have several concerns.\n\n1. Although it is intuitive that BCD optimization can lead to suboptimal solutions in a regression problem using a 3-layer shallow network, the situation could differ for a general L-layer deep neural network. Moreover, since LLM fine-tuning often involves classification tasks rather than regression, it would be helpful to have a theoretical analysis or at least some intuition on how this approach would perform with loss functions like cross-entropy.\n\n---\n\n2. Convergence analysis is missing. For a landscape correction scheme like BREAD, there should be at least some convergence results, even in the context of convex optimization, to show whether it is a provably convergent algorithm. I think it is crucial in optimization literature.\n\n---\n\n3. The performance improvement in the main experiment of LLaMA-3 fine-tuning appears marginal. It would be beneficial to include more comprehensive experimental results across a variety of settings, such as with alternative architectures like Phi-3 or Mistral, and other benchmark tasks.\n\n---\n\n4. Additionally, according to Table 1, BREAD is slower than BAdam in terms of Epoch GPU hours. Therefore, for a clearer understanding of the BREAD method, it would be helpful to include a figure comparing learning curves in terms of wall-clock time rather than just the number of iterations, as shown in Figure 3."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed.",
"Yes, Privacy, security and safety"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See the above weaknesses for questions."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The proposed method is easy to follow and interesting, involving the addition of correction parameters during BCD to improve performance.\n2. The paper presents results on LLaMA models across various tasks to validate the effectiveness of the proposed method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose integrating BCD with landscape correction to address two issues in fine-tuning LLMs with BCD: 1. inefficient computation due to backpropagation through deeper layers and 2. limited exploration of the optimization landscape inherent to BCD. They provide theoretical insights based on a three-layer neural network and demonstrate empirical performance improvements, albeit with increased computational and memory requirements."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The algorithm lacks a theoretical convergence rate guarantee, although establishing such a rate for BCD optimization is challenging.\n2. In Appendix B.1 of the BAdam paper, various ordering strategies for block partitioning in BAdam are investigated; however, this paper neither provides any rationale nor presents similar results.\n3. It is surprising to observe that BREAD-SGD-partial demonstrates convergence similar to BREAD-SGD, and the reason for this behavior is unexplained.\n4. I believe there is an error in the example used in Proposition 2. Specifically, the expressions $Ce_1=W_3^{(1)}-y$ and $y-W_3e_1=y-W_3^{(1)}$ does not subtract to 0."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. **Algorithm 1**\n - Are there two separate instances of the Adam optimizer being used?\n - It is unclear how U and V are initialized and used.\n - What constitutes a single iteration? Is it defined as one while loop or one landscape correction update?\n - How is one epoch of training data sampled for Algorithm 1?\n2. **Notation**\n - Ensure that all notations used in Section 2 are properly introduced.\n - What does $n$ represent in Section 4.2?\n3. **Experimental settings**\n - How is alpha chosen for LoRA because rank 80 is a pretty weird setting.\n - The evaluation setup is not fully aligned with MathInstruct [2] regarding the choice of out-of-domain datasets and the number of shots used.\n4. **Baselines**\n - How does the proposed method compare with LOMO?\n - The authors should include the performance of other variants for a more comprehensive evaluation.\n5. **Ablation studies**. The authors should include ablation studies that explore the effect of the choice of D, K, and r on performance.\n6. **Statistical significance**. The authors should report standard deviations across multiple random seeds to assess the robustness of the results.\n- Minor:\n - In the proof of Proposition 2, $C = [y - W_3^{(1)}, 0, \\ldots, 0]$ should be corrected.\n - Note that mathematical fine-tuning also falls under instruction tuning.\n - In Table 2 (Llama 3.1-8B, SimulEQ, 4-shot), the best result is incorrectly highlighted.\n\n[1] Lv, Kai, et al. \"Adalomo: Low-memory optimization with adaptive learning rate.\" *arXiv preprint arXiv:2310.10195* (2023).\n\n[2] Yue, Xiang, et al. \"Mammoth: Building math generalist models through hybrid instruction tuning.\" *arXiv preprint arXiv:2309.05653* (2023)."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The two issues of BCD are particularly intriguing and their demonstration via 3-layer NN is interesting.\n- The experimental results look promising."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents BREAD, a method designed to overcome limitations of block coordinate descent (BCD) in LLM fine-tuning by introducing landscape correction. The authors claim that BREAD unfreezes and updates inactive blocks using a lightweight optimization approach, addressing inefficiencies in derivative usage and suboptimal training landscapes. The method is evaluated through experiments on 8B and 70B LLMs, demonstrating memory savings and competitive downstream performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- **Presentation**. The paper’s presentation could be improved, particularly in terms of notations and the clarity of the method descriptions.\n- **Limited technical contributions**. The technical novelty appears limited, as BREAD is essentially a combination of LoRA and BCD while BREAD-SGD is a combination of LOMO [1] and BCD.\n- **Experiments**. Additional experiments, as outlined below, are necessary to strengthen the findings."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see Weakness."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper introduces an innovative combination of block coordinate descent (BCD) with a novel \"landscape correction\" approach, termed **BREAD**, tailored for large language model (LLM) finetuning. The originality stems from identifying limitations in the traditional BCD approach blending of BCD with lightweight, low-rank updates.\nEmpirical results from experiments on models like Llama 3.1-8B and Llama 3.1-70B demonstrate the method's effectiveness.\nThe paper is structured well."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a method designed to improve the efficiency and convergence speed of block coordinate descent (BCD) in large language model (LLM) finetuning. The authors identify two main limitations of traditional BCD when applied to deep neural networks: (1) the ineffective use of computed gradients for inactive blocks during backpropagation, leading to wasted computational resources, and (2) the suboptimal optimization landscape created by freezing most blocks, which can hinder convergence.\nTo address these challenges, BREAD integrates a lightweight landscape correction mechanism that updates inactive blocks using low-rank matrix adjustments. So the method becomes SGD/ADAM on all layers but for inactive layers the LORA structure is constructed for SGD/ADAM to optimize."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. In Case II of the proof for Proposition 1, it is unclear why \\( z_3^* \\) necessarily has at least one negative entry. The reasoning behind this is not evident.\n\n2. The proposition does not adequately illustrate the global optimization landscape when applying BCD to a DNN. A single step in BCD resulting in a higher loss does not imply eventual failure. Therefore, the conclusion stating, 'our analysis reveals that the sub-problem of BCD potentially excludes parts of the optimization landscape that provide search directions toward the optimal solution,' is not clearly justified.\n\n3. The described method involves performing SGD on all inactive layers for LoRA and also on active layers, making it more similar to Galore than true BCD. It is unclear how this approach achieves lower memory cost compared to Galore.\n\n4. Demonstrating an extreme example in the propositions does not conclusively show that BCD performs worse than other methods in general. Experiments should be conducted to test the hypothesis that BCD restricts the search space. This could be verified by measuring the change in magnitude for each layer and comparing the results between BCD and BREAD to substantiate that BREAD indeed expands the search space more effectively."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024accelerating,\ntitle={Accelerating Block Coordinate Descent for {LLM} Finetuning via Landscape Correction},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zs6bRl05g8},\nnote={under review}\n}"
},
"abstract": {
"value": "Training and finetuning large language models (LLMs) are resource-intensive tasks, with memory limitations being a key bottleneck. A classic optimization method, block coordinate descent (BCD), offers solutions by segmenting the trainable parameters into multiple blocks and optimizing one active block at a time while freezing the others, thereby significantly reducing memory cost. However, we identify that blindly applying BCD to train LLMs can be inefficient for two reasons. First, optimizing only the active block requires backpropagating through multiple deeper yet inactive blocks, resulting in wasteful computations. Second, the frozen blocks, when they are not quite close to optimality, can narrow the optimization landscape, potentially misguiding the training of the active block. To address these issues simultaneously, we propose integrating BCD with *landscape correction*, which unfreezes the inactive blocks and updates them in a cost-efficient manner during the same backpropagation as the update to the active block. We show that our method empirically improves vanilla BCD with minimal additional computation and memory. Experiments on 8B and 70B models demonstrate that our proposed method surpasses memory efficient baselines and matches Adam's downstream performance while reducing memory cost by 80% compared to Adam."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Block coordinate descent",
"large language model finetuning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/5be83a85abc351a5acd8269667f8fae12afee568.pdf"
},
"presentation": null,
"primary_area": {
"value": "optimization"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/0fdec2b8d9d6f5bcf9875f931d922895bce0f8dc.zip"
},
"title": {
"value": "Accelerating Block Coordinate Descent for LLM Finetuning via Landscape Correction"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zsVZCiYG2r | ChatSR: Conversational Symbolic Regression | main | Active | Symbolic Regression;Multi-modal Large Language Models;Scientific discovery | foundation or frontier models, including LLMs | 3;3;3;6 | 3;4;3;3 | 2;2;2;3 | 2;1;2;3 | 2;2;2;2 | 3.75 | 3.25 | 2.25 | 2 | 2 | -0.333333 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Suggestions/questions:\n+ Please use \\citep instead of \\cite for better readability\n+ The abbreviation such as \"MSDB\" in the related work is not so friendly for readers who are not familiar with the representative methods in this field.\n+ Line 201: [X, Y]->[X, y]?\n+ There is a missing space before \"by\" in Line 215\n+ Please briefly the relationship between Vicuna (Line 337) and LLaVA (Line 383) to improve readability. \n+ Please list the scale of the pretraining/post-training data used by other methods in the experiment section."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "+ The authors present a well-designed method to construct large-scale, high-quality synthesized QA data. Human-designed constraints are applied to validate candidate expressions. \n+ The method is evaluated on 13 datasets, and the experimental results demonstrate the effectiveness of the proposed method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a symbolic regression method based on multimodal LLMs. Compared with previous methods, the method allows users to enter their prior/assumptions in natural language for expressions, which removes the need for adding constraints to the search process in RL methods or code changes."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "+ The types of symbols considered are relatively limited (Line 212). Please elaborate on the comparisons of the types of symbols explored in this work and previous work. I'm also wondering if a more fine-grained, detailed description of the properties of expression may be helpful to increase the diversity of the instructions in data.\n+ The writing would benefit from another round of proofreading and more details. Please check the suggestions in the question section."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Is the conversation a multi-turn dialog? In Line 356, the authors mention collecting multi-turn QA pairs as the dataset. However, there is no multi-turn example in Figure 3. Additionally, please provide statistics of the dataset (e.g., average number of turns and diversity of the questions).\n\n- If the goal of the approach is to improve the ability to conform to constraints in the input question, it would be better to provide experimental results on how well ChatSR meets the provided constraints.\n\n- This work has significant overlap with constrained generation. For example, previous work in constrained generation proposed generation algorithms to ensure adherence to constraints such as including specific keywords in summaries or limiting completion length. It would be beneficial to add related work in this area.\n\n- Please use proper citation commands. The authors only use \\citet command which makes it difficult for readers to follow the main content."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The comprehensive background explanations of symbolic regression tasks make the paper accessible and easy to understand\n\n- The high-level figures provide clear visual aids that effectively guide readers through the methodology and concepts"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work proposes ChatSR, a conversational symbolic regression approach that enables finding mathematical expressions from observed data while following natural language instructions and constraints. The main contribution is its ability to understand and incorporate specific constraints (like symmetry, periodicity, or inclusion of certain mathematical symbols) through natural language instructions, making it more accessible to non-computer science users. The method leverages a multimodal large language model architecture that combines SetTransformer for data feature extraction with a language model for expression generation, trained on 15M synthetic question-answer pairs of data and language instructions. Experimental results show that ChatSR outperforms state-of-the-art baselines in both expression accuracy (R² score) and expression simplicity (lower node count), while also demonstrating strong zero-shot capabilities for properties not seen during training."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Hard to understand where the performance gain came from:\nAs far as I understood, the proposed approach is to generate a dataset consisting of QA pairs where questions contain some constraints and train a model with the dataset. However, the datasets used in Table 1 do not require the ability to follow natural language instructions. Despite the performance gain, it seems that the effectiveness of their approach is not fully supported by these results (Table 1).\n\nUnreliable results of the baseline:\nThe results of MMST in Table 1 are extremely low compared to the results in the original paper. For example, the reported R² score of MMSR was 0.9937±0.004, but in Table 1, the score is 0.9037±0.004. If the results are reproduced by the authors, they should carefully check the implementation of MMSR to provide a reliable comparison. Besides, the performance of NeSymRes is identical to the reported performance in the MMSR paper and it seems that the authors did not implement NeSymRes by themselves but used the reported scores.\n\nLack of analysis on the quality of the synthesized data:\nWhile I agree with the direction of this work which aims to train multi-modal language models to follow instructions, more in-depth analyses are required to better understand the quality of the generated data and the effectiveness of the generation pipeline. For example, the authors might analyze the complexity of the constraints in the input questions.\n\nThe analysis (Table 2) is quite trivial:\nProviding prior knowledge will naturally act as a hint for the regression of the solution function. In Line 480, the authors provide an example of prior knowledge: \"For example, for Nguyen-5 sin(x²)cos(x) − 1, we'll ask it to generate an expression that contains the symbols sin and cos\". However, it seems very trivial that a model would perform better with more information about the solution."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "(1) In Section 1, this paper defines $X \\in R^{n \\times d}$ and $y \\in R^n$. Under this definition, $\\{X,y\\}\\in R^{n \\times (d + 1)}$. However, in Section 3.4.1, $D=\\{X,y\\}\\in R^{n \\times d}$ is used. Please clarify. \n\n(2) Figure 2 in this paper bears a strong resemblance to Figure 1 in [1].\n\n(3) In Section 3.4, this paper states that Vicuna is used as the LLM, but in the last paragraph of Section 3.5, LLaVA is mentioned instead. Please clarify which LLM was used in your experiments and provide more details.\n\n(4) The definition of $X_{instruct}^t$ is ambiguous when $t = 1$ in equation (1). This leads to confusion, as two $x_D$ terms are included in equation (2).\n\n(5) In equation (2), $X_{instruct,<i}$ should likely be $X_{instruct}$. Please verify.\n\n(6) The MMSR results reported in Table 1 of this paper for expression complexity (Nodes) are consistent with those in [2], but the $R^2$ results are significantly lower. Please clarify. \n\n(7) While Section 2 contains numerous references, but no citations are present in Section 1. What is the rationale for this omission?\n\n(8) Does the training data actually match what is shown in Figure 3? (The examples contain multiple grammatical errors)\n\n(9) Section 3.5 mentions the use of multi-turn dialogue data, but the examples in Figure 2 show no clear relationship between the turns. Why not use single-turn data for training?\n\n(10) Table 1 reports an Average $R^2$ of 0.9820 for MMSR [2], while MMSR [2] reports 0.9934, significantly outperforming ChatSR. Given that the datasets and evaluation metrics should be consistent, what accounts for this discrepancy?\n\n\nReferences:\n\n[1] Kazem Meidani, Parshin Shojaee, Chandan K Reddy, and Amir Barati Farimani. Snip: Bridging math\n\nematical symbolic and numeric realms with unified pre-training. 
The Twelfth International Conference on Learning Representations, 2024.\n\n[2] Yanjie Li, Jingyi Liu, Min Wu, Lina Yu, Weijun Li, Xin Ning, Wenqiang Li, Meilan Hao, Yusong Deng, \n\nand Shu Wei. Mmsr: Symbolic regression is a multi-modal information fusion task. Information Fusion, 2024."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Users can specify requirements in natural language without needing code modifications, lowering the barrier to entry for symbolic regression.\n\n2. The LLM interprets prior knowledge from natural language instructions, generating higher-quality expressions as a result.\n\n3. Integrating MLLMs for symbolic regression (SR) is a novel approach, creating new opportunities for user-friendly applications.\n\n4. The model’s capability to generate expressions for previously unseen properties highlights the potential of LLMs in this domain."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces ChatSR, a symbolic regression method leveraging multimodal large language models (MLLMs) to enable conversational interactions. Unlike traditional approaches that rely solely on observed data, ChatSR allows users to describe desired properties of the target expressions in natural language, such as periodicity or specific symbols like \"sin.\" Through a feature extractor and projection layer, the method maps data features to word features, facilitating this natural language input. ChatSR achieves state-of-the-art performance across various datasets and exhibits strong zero-shot capabilities, generating valid expressions even for requirements unseen during training."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper lacks a strong motivation. It seems to primarily revise existing Transformer-based symbolic regression (SR) methods by simply replacing the decoder layers with decoder-only LLM layers.\n\n2. Although incorporating prior knowledge enhances performance, it restricts generalization. The model’s success relies heavily on the presence of prior knowledge in the prompts.\n\n3. Training the model—especially due to the feature alignment process and use of LLMs—requires substantial computational resources, a limitation that is insufficiently addressed in the paper.\n\n4. The writing quality is poor and requires extensive proofreading to improve clarity and readability. There are numerous grammatical errors (notably in Section 2.2) and citation formatting issues (e.g., “RSRMXu et al.,” and “Double Q learningHasselt (2010)”).\n\n5. The paper’s structure is confusing, with frequent grammatical errors, improper citation formatting, and some unclear figures and experimental results, making it challenging to understand the authors' intended message."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "see weakness"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "ChatSR presents a novel approach to symbolic regression by integrating natural language understanding with expression generation. This allows for intuitive user interaction and incorporation of domain knowledge.\nThe method performs strongly, outperforming state-of-the-art baselines in fitting accuracy. Its ability to handle complex requirements and exhibit zero-shot learning is particularly impressive.\nThis work opens up new scientific discovery and data analysis possibilities by making symbolic regression more accessible and flexible."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces ChatSR, a conversational symbolic regression method based on MLLM. The key innovation is the ability to generate mathematical expressions that fit observed data while meeting specific requirements described in natural language. The method combines a SetTransformer for data feature extraction, an LLM for understanding natural language instructions and generating expressions, and a projection layer to align data and text features. ChatSR demonstrates state-of-the-art performance in fitting accuracy and exhibits zero-shot capabilities in understanding and applying previously unseen requirements."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The introduction lacks proper citations, which are crucial for contextualizing the work and acknowledging prior research\nProper use of \\citep and \\citet commands with appropriate spacing is needed throughout the paper.\nThe related work section is excessively long, potentially overshadowing the paper's own contributions.\nThe table captions are positioned below the tables, which doesn't adhere to the ICLR template requirements.\nFigure 4 lacks clarity in its legend, text, and content. Improving the visual presentation would enhance the reader's understanding of the results.\nThe paper provides a limited analysis of the experimental results.\nIn Figure 1, \"Larger Language Model\" should likely be \"Large Language Model.\"\nThe paper lacks discussion on computational requirements and other important aspects of the experimental setup. Including this information would provide valuable context for the method's practical applicability and reproducibility."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "A large language model for symbolic regression that can fit the data by generating desired expressions based on natural language prompts."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024chatsr,\ntitle={Chat{SR}: Conversational Symbolic Regression},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zsVZCiYG2r},\nnote={under review}\n}"
},
"abstract": {
"value": "Formulas are the language of communication between humans and nature. It is an important research topic of artificial intelligence to find expressions from observed data to reflect the relationship between each variable in the data, which is called a symbolic regression problem. The existing symbolic regression methods directly generate expressions according to the given observation data, but we cannot require the algorithm to generate expressions that meet specific requirements according to the known prior knowledge. For example, the expression needs to contain the symbol `$\\sin$' or be periodicity, and so on. Even if it can, it often requires very complex operations, which is very inconvenient. In this paper, based on multi-modal large language models, we propose ChatSR, a conversational symbolic regression method that can generate expressions that meet the requirements simply by describing the requirements with natural language instructions. By experimenting on the test datasets, we can demonstrate that ChatSR leads the state-of-the-art baselines in fitting performance. More notably, ChatSR can well understand the prior knowledge contained in natural language prompts, and can further improve the quality of generated expressions according to the prior knowledge. In addition, it is exciting that ChatSR has good zero-shot capability."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Symbolic Regression",
"Multi-modal Large Language Models",
"Scientific discovery"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/ebb95ef17f5429c20f64f1019e0f36e69ce01aeb.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/48422ba4eea42e0af706a756953479124a6be05b.pdf"
},
"title": {
"value": "ChatSR: Conversational Symbolic Regression"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
ztT70ubhsc | KnobGen: Controlling the Sophistication of Artwork in Sketch-Based Diffusion Models | main | Active | Computer Vision;Image Generation;Text-to-Image Generation;Conditional Image Generation;Diffusion Models | applications to computer vision, audio, language, and other modalities | 1;5;5;5 | 5;3;5;5 | 2;2;2;2 | 2;2;2;2 | 1;3;2;2 | 4 | 4.5 | 2 | 2 | 2 | -0.333333 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See weaknesses"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "[+] The paper is easy to read, and generally well written.\n\n[+] The work addresses the research gap of allowing user control the influence of sketch and text on the generated image useful for end-users ranging from novice to professionals."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes \"KnobGen\", a novel algorithm for sketch+text-based image generation and provides the user control over the balance between fine-grained and coarse-grained queries coming from the input sketch and textual prompt respectively. It proposes a dual-pathway framework that democratises sketch+text-based image generation by adapting to varying levels of sketch abstraction and user-skills. It proposes a Coarse-Grained Controller (CGC) block for high-level semantics and a Fine-Grained Controller (FGC) block for further refinement of fine-grained features in the final output. With the possibility of controlling the relative strengths of these modules, this method can control how fine or coarse grained the final output will be. Finally, it also proposes a new sketch dataset on top of the MultiGen-20M dataset. Experimental results shows this method to outperform other popular methods like ControlNet, T2I-Adapter, etc."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "[-] The paper includes a qualitative and quantitative comparison with several SoTA methods. However, a thorough quantitative ablation study examining the impact of each component would greatly enhance the reader's understanding of their relative importance. There exists an ablative study but it appears to be superficial and does not justify all design choices made in the paper.\n\n[-] The quantitative results provided in the work lacks critical information regarding the value of knob used -- necessary to gauge the improvement of the proposed model. It would have been great to see the changes quantitatively on how the actual numeric values of the knob affecting the final output metrics.\n\n[-] Generally the contribution factor of both micro and macro pathways should sum to 1. However, in this paper, the knob during training increases the contribution of micro pathway in each epoch, without decreasing the macro pathway. Won't doing this cause issues in the overall magnitude of the feature map (as one pathway is getting more weightage than the other)?\n\n[-] Any particular reasoning behind using a \"tanh\" instead of any other function in knob during training for increasing the contribution of micro pathway? It would have been great to see ablation in this aspect as well, to fully justify this design choice.\n\n[-] Other SoTAs like ControlNet also provides a balancing factor to control sketch-conformity. The proposed method should be compared in this aspect as well to check how granular the control is compared to SoTAs.\n\n[-] It would be very interesting to see \"sketch-only\" generation results, either by passing null-prompt or making the text-pathway weighting factor to zero."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please refer to weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The proposed method works for input sketches with varying abstraction levels, ranging from amateur and professional sketches. This inclusiveness is meaningful in practice as it can expand the scope of end-users with different needs and drawing skills.\n- The design of the modulator during model training is interesting. The goal is to harmonise the dual-path conditions, thus achieving more enjoyable results. The intuition to let the coarse-grained cues dominate at the early denoising steps and gradually increase the impact of fine-grained cues is sound.\n- The degree of alignment between the input sketches and the generated images is controllable during the inference. Such flexibility is desirable for boosting the applicability of sketch-guided text-to-image generation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work proposed a sketch-based image generator that accepts sketches with different complexities from novice users and professionals. And reasonable outcomes (i.e., high-quality images) can be obtained in either case. At its centre is the proposed dual-path way conditional diffusion model, which can handle the coarse-grained and fine-grained conditions separately and simultaneously. Experimental results validated the flexibility and effectiveness of the proposed method for sketch-to-image generation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The professional sketches (Multi-Gen-20M) considered in this work are in binarised versions of HED edges, which is very different from what a real artist would draw (no artist or professional sketcher would produce lines like those in Figure 1). This makes the basic assumptions/conditions of the paper not very rigorous, somewhat deviating from the ambitious objectives, i.e., dealing with pro-sketch and any other complexity levels with a unified model. \n- The modulator is heuristically designed. It is hard to justify if there is a scalability issue that might need tedious hyperparameter tuning for diverse training data.\n- The effectiveness and applicability of the knob mechanism is questionable. \n - From Figure 6, the effect does not seem very pronounced: in the volcano example, the volcano corresponding to the intermediate gamma value appears to match the details of the input sketch better; in Keith's example (the second row from the bottom), the changes in facial details are also not noticeable. \n - Besides, the user has to try different knob values until satisfaction (and this may be pretty different for diverse input sketches) since it has no apparent relation to the user's need for the complexity level from the input sketches.\n - The impact of fine-grained cues is hard to manage precisely, as they have been injected into the model at early denoising steps, and the effect will last in the following denoising steps.\n- The current competitors in experiments are not designed for sketches. It would be great if some sketch-guided image generation works, e.g., [a], could be compared and discussed.\n- There is a “second evaluation set” with 100 hand-drawn images created by novice users used for experiments. It would be great to show these sketch images for completion.\n\n[a] Sketch-Guided Text-to-Image Diffusion Models, SIGGRAPH 2023"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "The writing is not good, since there are many sentences we can not understand."
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The authors provide visually appealing figures that make the content engaging and easy to understand.\n2. Comprehensive experiments demonstrate the effectiveness of the proposed method, showcasing its potential in real-world applications."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors introduce an new dual-pathway framework that aims to overcome the shortcomings of existing sketch-based diffusion models. By considering both fine-grained and coarse-grained features, this approach elegantly combines high-level semantic understanding with low-level visual details. The proposed method not only enhances the connection between user input and model robustness but also paves the way for more effective results in sketch-based tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1, I can not follow this paper. The writing is bad, since there are many sentences I can not understand. In introduction, the presentation is not good. (1) ‘Furthermore, we observed that the quality and alignment of the generated images with the input sketch are highly sensitive to the weighting parameter that governs the model’s dependence on the condition, Figure 2. (2) 'Additionally, the removal of the text-based conditioning in DM makes these models ignore the semantic power provided by text in diffusion models trained on large-scale image-text pairs, Additionally,...'. There are two ‘Additionally’ , which is not enjoyable. (3) 'Professional-oriented models like ControlNet and T2I-Adapter are designed to handle only artistic-grade sketches Fig. 3.a, while amateur-oriented approaches Koley et al. (2024), cater to novice sketches without text guidance Fig. 3.b'. In some places it is Figure , while some ones are Fig.. (4) 'The Macro Pathway extracts the high-level visual and language semantics from the sketch image and the text prompt using CLIP encoders and injects them into the DM via our proposed Coarse-Grained Controller (CGC)'. too much ‘ and’ \n\n2. Some figure captions lack clarity; for instance, in Figure 2, the term “weights” could use more explanation.\n\n3. There are concerns that the overall writing style may resemble that produced by a language model, suggesting a need for a more personal touch."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to the questions in the “Weaknesses” part."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1.\tKnobGen introduces a dual-pathway framework that balances the effects of both coarse-grained and fine-grained features, resulting in better performance compared to other sketch-based image generation methods. \n\n2.\tThe authors introduce a dynamic modulator to regulate the influence of the CGC and FGC modules during training, thereby preventing premature overfitting to fine details."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors propose a sketch-based image generation method which can generate high-quality images with varying levels of sketch complexity, named KnobGen. It designs a coarse-grained controller module and a fine-grained controller module to learn high-level semantics and detailed refinement separately. Then the authors introduce a knob inference mechanism to align with the user’s specific needs. The experiments have shown superior performance compared to other sketch-based images generation methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Although the proposed method balances the effects of coarse-grained and fine-grained features, it may not meet users' expectations when the input sketches are highly abstract. For example, as depicted in the last column of the second and third rows in Figure 4, the generated images align with the distortions shown in the sketches. Although this effect can be adjusted by the hyperparameter γ, changing the “controllable scale” within the ControlNet and T2I-adapter can also yield similar outcomes. To be specific, the authors are recommended to provide additional results based on varying the 'controllable scale' of ControlNet and T2I-adapter. This would illustrate that the proposed method can meet user needs without the need for adjusting γ. \n\n2. Setting a hyperparameter γ to control the conditional signal based on the denoising steps during inference is a common operation in T2I task, which is not a novel technical contribution. The authors are suggested to demonstrate the differences between this method and previous ones. For instance, they could illustrate the relationship between the tanh-based modulator used in training and the γutilized during inference.\n\n3. There are still some experimental issues:\n\n(1). The evaluation of the generated results is somewhat subjective. Additional user studies could further demonstrate the strengths of the proposed method. Specifically, the authors can prepare more than 50 sets of sketches, which need to be evenly distributed across different levels of complexity. Users can then rank the results generated by the proposed method and other sketch-based methods.\n\n(2). Since these models that assess user preferences are trained on their own designed datasets, a single aesthetic score is insufficient. The authors are suggested to conduct quantitative experiments using the HPS-v2[A] and Pick score[B] metrics, which are widely used for comprehensively assessing the generation quality.\n\n[A]. 
Wu X, Hao Y, Sun K, et al. Human preference score v2: A solid benchmark for evaluating human preferences of text-to-image synthesis[J]. arXiv preprint arXiv:2306.09341, 2023.\n\n[B]. Kirstain Y, Polyak A, Singer U, et al. Pick-a-pic: An open dataset of user preferences for text-to-image generation[J]. Advances in Neural Information Processing Systems, 2023, 36: 36652-36663.\n\n(3). In the ablation study, there are too few examples, as the displayed images only include results of human portrait generation. The authors are recommended to add results of sketch generation for other categories. And the authors are recommended to provide more quantitative ablation results, with the same metrics as the quantitative experiments to demonstrate the effectiveness of the proposed method."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "KnobGen combines coarse- and fine-grained controls for flexible text-to-image generation, adapting to both amateur and professional sketches."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024knobgen,\ntitle={KnobGen: Controlling the Sophistication of Artwork in Sketch-Based Diffusion Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=ztT70ubhsc},\nnote={under review}\n}"
},
"abstract": {
"value": "Recent advances in diffusion models have significantly improved text-to-image (T2I) generation, but they often struggle to balance fine-grained precision with high-level control. Methods like ControlNet and T2I-Adapter excel at following sketches by seasoned artists but tend to be overly rigid, replicating unintentional flaws in sketches from novice users. Meanwhile, coarse-grained methods, such as sketch-based abstraction frameworks, offer more accessible input handling but lack the precise control needed for detailed, professional use. To address these limitations, we propose KnobGen, a dual-pathway framework that democratizes sketch-based image generation by seamlessly adapting to varying levels of sketch complexity and user skill. KnobGen uses a Coarse-Grained Controller (CGC) module for high-level semantics and a Fine-Grained Controller (FGC) module for detailed refinement. The relative strength of these two modules can be adjusted through our knob inference mechanism to align with the user's specific needs. These mechanisms ensure that KnobGen can flexibly generate images from both novice sketches and those drawn by seasoned artists. This maintains control over the final output while preserving the natural appearance of the image, as evidenced on the MultiGen-20M dataset and a newly collected sketch dataset."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Computer Vision",
"Image Generation",
"Text-to-Image Generation",
"Conditional Image Generation",
"Diffusion Models"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/f7e5f3d6adf9f79dfca650dbd2780317d1a6be26.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/92c11304029f0c2e07dc6a3204c18fd0419b2c12.zip"
},
"title": {
"value": "KnobGen: Controlling the Sophistication of Artwork in Sketch-Based Diffusion Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
ztzZDzgfrh | ReDeEP: Detecting Hallucination in Retrieval-Augmented Generation via Mechanistic Interpretability | main | Active | Retrieval-Augmented Generation Hallucination;Hallucination Detection;Mechanistic Interpretability | alignment, fairness, safety, privacy, and societal considerations | 5;6;8 | 4;3;4 | 3;3;4 | 3;3;4 | 2;3;3 | 6.333333 | 3.666667 | 3.333333 | 3.333333 | 2.666667 | 0.188982 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "I have no ethics concerns."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Why LLama2-7B (smaller and older version than others) has better results on Dolly in terms of F1 or Accuracy in Table 1?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Authors provide a straightforward method to detect hallucinations in RAGs that does not require model fine-tuning.\n- Empirical results provided by the authors look good."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a method for detecting hallucinations of Retrieval Augmented Generation (RAG) models in the scenario when retrieved context is accurate and relevant. \n\nThe authors hypothesize that hallucinations are caused by models ignoring the retrieved context and overemphasizing their parametric knowledge. To capture these concepts they introduce two auxilary scores: External Context Score (ECS) that reflects utilization of the retrieved context by the model, and Parametric Knowledge Score (PKS) that reflects utilization of the parametric knowledge. Hallucinations are then predicted by thresholding a hallucination score H which is computed as a weighted sum of ECS and PKS.\n\nIn addition to that, the authors propose a method to reduce hallucinations by suppressing outputs of attention heads that contribute to low ECS and outputs of fully-connected layers that contribute to high PKS."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "## Lack of justification for PKS and ECS\n\n### No PKS justification\nAlthough PKS is correlated with a hallucination label (line 319) there is still no guarantee that it is adding parametric knowledge. Since you do not provide any theoretical justification for this score, at least an empirical justification is needed. You can run a simple experiment: use LogitLensed outputs before FFN as final outputs and check whether it removes the parametric knowledge bias using some of the setups for that, for example, the one from [1] (they study it through the prism of the phenomenon you encounter in RQ3 and Appendix E).\n\n### Questionable ECS justification\nContrary to the PKS the authors provided empirical justification for the ECS measuring model reliance on context, however, I find it not very convincing so far.\n\nFirst of all, I do not see how the ratio of attention head attending vs mis-attending justifies ECS. It would make more sense to me if you provided such a ratio for mulitple different values of ECS and observed that the higher the ECS the more often a model attends.\n\nSecondly, I am not sure that ratio of attending is computed correctly. As far as I understood for LLama-7B you take hallucinated response (which means that it contradicts external context) and the most attended span in external context. Then you ask gpt-4o to evaluate whether this span supports existence of a conflict in response or not. If that is the case, I do not understand why this experiment shows whether the model attends (the attention span contains part of the context needed for the correct answer) or mis-attends. 
If attention span supports the existence of a conflict in response it might still not be relevant for the correct response itself, which means a conflict exists but we can not call it a hallucination according to your definition (hallucination = response is contradicting the context or is not supported by it - line 72).\n\nPlease correct me if I misunderstood the experiment setting, what is meant by attending, or the way attending and mis-attending is computed.\n\n## Too many hyperparameters\nI am afraid that the proposed hallucination detection method is not applicable in practice as it requires a lot of manual hyperparameter tuning. According to the provided values, they all are different per dataset and model (see Appendix I). They include:\n\n- top k % for ECS \n- top k % for PKS\n- tau threshold for H - page 8 bottom\n- alpha and beta for reweighting page 9 top\n- chunk size for the chunked version of REDEEP\n\nI suggest that the authors discuss strategies for automating hyperparameter selection or provide guidelines for choosing these parameters in real-world applications.\n\n## Insufficient experiments\n\n### Hallucination detection experiment\n- For RagTruth dataset there exist baselines provided by the original paper [2] which perform better than all the baselines considered by you, could you please include them? E.g. Baseline LLama2-13B results fine-tuned on RagTruth have 78.7 F1, see Table 5 in [2] vs yours 78.3 in Table 1. I think the comparison makes a lot of sense since you tune many hyperparams using RagTruth validation dataset while you could simply fine-tune that baseline on the same data instead.\n- Same comes for Dolly dataset, please include results for AlignScore and RepC-LE-nn-n2000-e2000 that have 84 and 86 accuracy correspondigly, while the best method provided by you scored 73.73 (LLama2-7B).\n- Please also provide results for the Noisy Context split from Dolly [3] dataset because it better approximates realistic RAG application scenario. 
\n\n### Causal experiment\n\n- First of all, I don’t see how a higher NLL difference for the experimental group than for the control group shows a causal relation between hallucinations occurrence and copying heads neglecting necessary knowledge, could you please elaborate?\n- The experiment results are very noisy and it is hard to draw any conclusions from them, for example, boxplot of the experimental group is fully contained within the boxplot of the control group in Figure 5 (b). \n- It is not clear how many heads are within experimental and control groups, it can be the case that loss changes are bigger for the experimental group simply because it intervenes in more heads.\n\n### Hallucination generation experiment\n\nPrompt for truthfulness (Appendix L) creates bias, since GPT-4o knows which answer belongs to the baseline and which to AARF. It can influence its answers since usually in scientific papers named methods outperform baselines, which must have been the case on chatgpt training data as well and possibly created such a bias. \n\nInstead, it would be nice to see the results for prompts that contain anonymous names (e.g. model 1 and model 2 instead of baseline and AARF) to avoid the mentioned naming bias and have a randomly shuffled order of AARF and Baseline inputs before showing to GPT-4o to avoid positional bias.\n\n### Lack of sensitivity experiments\nPlease provide sensitivity experiments to the numerous hyperparameters you introduced (see the section \"Too many hyperparameters\" for the hyperparameters)\n\n## Unclear writing\n- While being core concepts of the paper, Copying Heads (set A) Knowledge FFNs (set F) are not formally defined (line 381). I guess set A is built by taking top-k attention heads after sorting them by ECS while set B is built by taking top-k FFNs after sorting them by PKS, but I could not find it in text.\n- Strange ordering equations, for example, Eq. 
2 that defines an important part of ECS has an undefined value “a” which is only introduced in Appendix Eq. 8.\n\n## Typos\n455: REDEPE\n\n## References\n\n[1] Kortukov, E., Rubinstein, A., Nguyen, E., & Oh, S.J. (2024). Studying Large Language Model Behaviors Under Context-Memory Conflicts With Real Documents.\n\n[2] Wu, Y., Zhu, J., Xu, S., Shum, K., Niu, C., Zhong, R., Song, J., & Zhang, T. (2023). RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models. Annual Meeting of the Association for Computational Linguistics.\n\n[3] Hu, X., Ru, D., Qiu, L., Guo, Q., Zhang, T., Xu, Y., Luo, Y., Liu, P., Zhang, Y., & Zhang, Z. (2024). RefChecker: Reference-based Fine-grained Hallucination Checker and Benchmark for Large Language Models. ArXiv, abs/2405.14486."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "The paper offers valuable insights into how RAG-based LLMs produce hallucinated outputs. Building on these findings, it proposes a detection method and mitigation strategy grounded in this understanding. Presentation issues remain, particularly in the main figure explaining the method, yet the contribution is significant."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "Each step is thoughtfully motivated, with both conceptual reasoning and empirical validations in §3. The detection method shows effective results in Table 1, and the RAG truthfulness improves using AARF, as shown in Figure 6."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Retrieval-Augmented Generation (RAG) models are still prone to hallucinations. This paper explores the internal mechanisms behind hallucinations in RAG settings. Building on these insights, the authors propose a hallucination detection method, ReDeEP, and a RAG truthfulness improvement method, AARF."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Figure 3 is problematic. The starting point and flow of the diagram are unclear, with too many arrows, making it hard to identify the main computational paths. An effective graphic would show one main data processing pipeline, which is missing here. Additionally, the quantities computed are not well-defined. Panels (b) and (c) add no extra information and could be removed without loss.\n\nOtherwise, rather minor points:\n- l.281: Please describe the number of hallucinations and non-hallucinations (h = 0 and h = 1) in the evaluation set.\n- Pearson's Correlation in §3: Why measure Pearson’s correlation between ECS and hallucination labels (binary)? It would be more informative to report accuracy at a fixed threshold or detection metrics such as AUROC. Similarly, for PKS and hallucination, detection metrics like AUROC would be preferable.\n- l.465: Could you clarify the criteria for selecting thresholds for accuracy, recall, and F1?\n\nEven more nits:\n- Use full names for FFN, ReDeEP, and AARF, at least in the abstract.\n- In Figure 4(c), clarify what the colour bar values represent.\n- Overall, font sizes in the figures are too small.\n- Structure in §3.2 is difficult to follow. Stick to a standard structure using \\section, \\subsection, \\subsubsection, \\paragraph, etc., rather than introducing new hierarchies (boldface, underline, italics, numbering (1), (2), …)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Could you discuss more the trade-offs of your method? In particular thinking about real time settings? \n\n2. Have you tested your method on non-LLama models? Do you anticipate any challenges for different models? \n\n3. Could you provide some example outputs pre and post using AARF? Or can you speak to the effect AARF has on the coherence of the model’s output after AARF?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The development of the ECS and PKS metrics to understand the contributions external and internal knowledge have on the LLM's generation is a compelling and novel way to understand LLM outputs. \n\n2. They demonstrated great empirical validation by running extensive experiments across two datasets, three LLMs, and many baseline methods. \n\n3. They also introduce a method to curb hallucinations called AARF - which relates back to the introduced metrics nicely."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work proposes ReDeEP - a method to detect hallucinations by LLMs in retrieval augmented generation settings by using mechanistic interpretability. The authors introduce two novel metrics - (1) the External Context Score (ECS) and (2) Parametric Knowledge Score (PKS) to identify when hallucinations happen because of over reliance on internal knowledge or from the underuse of external information. The authors also introduce AARF (Add Attention Reduce FFN), which aims to adjust the weights of the attention heads, and feed forward layers to reduce hallucinations. Their approach is empirically validated on standard benchmarks, demonstrating superior performance to existing methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Performing this analysis at the token/chunk level might limit its practicality in real time or large scale settings - it would be nice to have a richer discussion of the trade-offs and real world feasibility. \n\n2. The experiments are extensive - however they are all with the LLama family of models - testing (even a much smaller set) on a different model would be informative. \n\n3. While the performance of AARF seems good (Figure 6) - it would be good to see some example outputs - its unclear how this could effect the model’s output in terms of coherence/writing in general."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose ReDeEP for detecting hallucinations in RAG models by decoupling external context and parametric knowledge, and AARF to reduce hallucinations by modulating the contributions of Knowledge FFNs and Copying Heads."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024redeep,\ntitle={ReDe{EP}: Detecting Hallucination in Retrieval-Augmented Generation via Mechanistic Interpretability},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=ztzZDzgfrh},\nnote={under review}\n}"
},
"abstract": {
"value": "Retrieval-Augmented Generation (RAG) models are designed to incorporate external knowledge, reducing hallucinations caused by insufficient parametric (internal) knowledge. However, even with accurate and relevant retrieved content, RAG models can still produce hallucinations by generating outputs that conflict with the retrieved information. Detecting such hallucinations requires disentangling how Large Language Models (LLMs) balance external and parametric knowledge. Current detection methods often focus on one of these mechanisms or without decoupling their intertwined effects, making accurate detection difficult. In this paper, we investigate the internal mechanisms behind hallucinations in RAG scenarios. We discover hallucinations occur when the **Knowledge FFNs** in LLMs overemphasize parametric knowledge in the residual stream, while **Copying Heads** fail to effectively retain or integrate external knowledge from retrieved content. Based on these findings, we propose **ReDeEP**, a novel method that detects hallucinations by decoupling LLM’s utilization of external context and parametric knowledge. Our experiments show that ReDeEP significantly improves RAG hallucination detection accuracy. Additionally, we introduce AARF, which mitigates hallucinations by modulating the contributions of Knowledge FFNs and Copying Heads."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Retrieval-Augmented Generation Hallucination",
"Hallucination Detection",
"Mechanistic Interpretability"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/fd8dea05000b942c3cdc44472d69ed9943510d7a.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "ReDeEP: Detecting Hallucination in Retrieval-Augmented Generation via Mechanistic Interpretability"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zu7cBTPsDb | MVTokenFlow: High-quality 4D Content Generation using Multiview Token Flow | main | Active | 4D Generation;Dynamic 3D Gaussian Splatting;Dynamic Reconstruction;Diffusion Models | applications to computer vision, audio, language, and other modalities | 5;5;5;6 | 4;5;5;4 | 2;3;2;3 | 2;2;2;3 | 2;2;2;3 | 5.25 | 4.5 | 2.5 | 2.25 | 2.25 | -0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* Will the custom dataset be made available?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* method seems relatively clean without unneeded bells and whistles\n* qualitative results look good\n* quantitative metrics show improvement\n* ablation study is presented"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a method by which multiview images generated by Era3D independently per frame in a video can be adjusted to be temporally consistent. A coarse dynamic 3DGS reconstruction is made from the initial multiview videos and 2D flows are computed from these videos. These flow fields are used to ensure tokens associated by flow between frames are similar. The multiview images are regenerated from these modified tokens and then a final dynamic 3DGS reconstruction is built.\n\nResults are presented on Consistent4D and a self-collected dataset."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* Minor point, but I'd recommend the authors add a clear list of contributions at the end of the intro.\n\n* Presented videos in the supplemental are pretty limited. The columns also aren't labeled. I'm guessing the left most image and flow field correspond to the input and the right most is some arbitrary second view? It would have been nice to see some sort of orbit around the object as the video plays."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "No"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "The authors claim they compare with all open-sourced methods, but as far as I know, there are other open-sourced video-to-4d works which are not included for comparison, for example, DG4D and Diffusion^2. I'm not asking the authors to comapre with all open-sourced works, instead, I suggest to modify the inappropriate expression.\n\nBesides, I'm willing to raise my score if the authors address my concerns during rebuttal."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1) It is natural to incorporate knowledge of video generation and editing methods into 4D generation. TokenFlow is a reasonable attempt.\n2) The authors conduct comparisons and ablation studies to support their claims."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work focuses on dynamic object generation from a monocular video. The authors propose a two-stage approach, enabling a 3D diffusion model to generate temporally coherent multi-view videos based on reference view video. Specifically, they expand self-attention to all images of different timestamps and generate pseudo multi-view videos in the first stage. The pseudo videos are utilized to reconstruct coarse dynamic gaussians with motion trajectories. In the second stage, they propose a token propagation technique based on 2D optical flow rendered by dynamic gaussians, which helps the 3D diffusion model generate more temporally coherent videos. Experiments on public datasets validate the effectiveness of the proposed framework."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Token Propagation: As illustrated in Section 3.1, token propagation is only conducted between key frames and non-key frames, which means the temporal consistency among key frames is not guaranteed. Although the authors utilize enlarged self-attention for key frames, I think this cannot ensure full temporal consistency, especially when the objects have large movements.\n\n2. Experiments:\n\na) Dataset: The authors are encouraged to conduct qualitative experiments on the official Consistent4D dataset instead of its demo videos. The official dataset contains 7 synthetic objects with multi-view videos; please refer to the SV4D paper for the performance of other video-to-4D methods on that dataset.\n\nb) Comparison: The authors are encouraged to conduct novel-view synthesis comparisons with other methods if possible.\n\nc) Ablations: The authors are encouraged to conduct quantitative ablations and provide more qualitative analyses.\n\nd) Visualizations: The supplementary videos have only two views of the object, one of which is the input view, which does not provide readers with a full understanding of the performance of the proposed method. The authors are encouraged to provide videos with more views. Additionally, in Figure 3, the comparison with state-of-the-art methods is conducted in the input view, which may not be appropriate."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "(1) How long does it take for the coarse stage and the fine stage, and how much GPU memory is required for each stage? Also, how about the other baselines?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "(1) The task of video-to-4D content generation, while preserving the details (spatial and temporal information) of an input object, is challenging and well-motivated.\n\n(2) They appropriately chose the multi-view generation method Era3D for spatial consistency and the video generation method TokenFlow for temporal consistency, using the scene representation 3DGS, to construct their pipeline.\n\n(3) The proposed method achieves SOTA performance on video-to-4D generation compared to existing baselines.\n\n(4) The paper is easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper addresses the problem of multi-view and temporal consistency in video-to-4D content generation. The proposed method, MVTokenFlow, adopts Era3D for multi view video generation to create coarse 3DGS, and TokenFlow to enhance the temporal consistency of the coarse 3DGS. With this method, the authors are able to generate 4D content while preserving the detail of the content over the timeline. Experiments demonstrate the effectiveness of the proposed method in 4D generation, both quantitatively and qualitatively."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "(1) Era3D uses Row-wise Multiview Attention (RMA) to achieve multi-view consistency. In L232, the authors say that they use enlarged self-attention to improve the temporal consistency of multi-view video frames. Using $K=6$ viewpoints and a keyframe interval of 8 frames of video, the enlarged self-attention across all different timesteps may be quite memory-heavy. I think this process is a distinguishing feature compared to the previous work Era3D, but the authors don't provide an ablation study on how this component improves temporal consistency in the coarse stage.\n\n(2) As I understand it correctly, the coarse stage produces less temporal-consistent 3DGS, and the fine stage renders 2D flows to guide the re-generation of the multi-view images and create the final temporally consistent 3DGS. When describing this, Fig. 2 does not intuitively illustrate the process. It would be better if the two stages were completely separated.\n\n(3) The ablation studies on token propagation and flow loss for multi-view video generation show only qualitative results. Quantitative results (using the same metrics as in Tab. 1) are needed to show the generality of these modules.\n\n(4) Similar to point (3), the authors use the normal loss, but this feature is not ablated, either qualitatively or quantitatively."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please refer to the Weakness."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The enlarged self attention on the temporal dimension to maintain the temporal consistency.\n2. Using 2D flow to warp the feature as the additional constraint to improve the temporal consistency.\n3. The regeneration and refinement have been used to further improve the final performance.\n4. The experiment has been on Consistent4D dataset to demonstrate the performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposed the enlarged self attention on the temporal dimension and incorporate the flow to warp the feature as the additional constraint to improve the temporal consistency. Moreover, the regeneration and refinement have been used to further improve the final performance. The experiment has been on Consistent4D dataset to demonstrate the performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The author does not present clearly about the motivation and main contributions of this work in the introduction part. \n2. In the related work, the author firstly claimed the diffusion model applied in 4D generation will be discussed, while in the section, 4D Scene Representation, only NeRF and 3DGS based method have been discussed. The related works need to be reorganized to ensure the logical coherence between each parts.\n3. Applying the self attention on the temporal dimension has been well studied and proved to be effective way to maintain the temporal consistency. Incorporating this strategy may not have enough novelty.\n4. No need to further training not have enough novelty since building the original temporal multi view diffusion model requires huge training while no need for retraining for enlarging the self attention is straight forward."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024mvtokenflow,\ntitle={{MVT}okenFlow: High-quality 4D Content Generation using Multiview Token Flow},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zu7cBTPsDb},\nnote={under review}\n}"
},
"abstract": {
"value": "In this paper, we present MVTokenFlow for high-quality 4D content creation from monocular videos. Recent advancements in generative models such as video diffusion models and multiview diffusion models enable us to create videos or 3D models. However, extending these generative models for dynamic 4D content creation is still a challenging task that requires the generated content to be consistent spatially and temporally. To address this challenge, MVTokenFlow utilizes the multiview diffusion model to generate multiview images on different timesteps, which attains spatial consistency across different viewpoints and allows us to reconstruct a reasonable coarse 4D field. Then, MVTokenFlow further regenerates all the multiview images using the rendered 2D flows as guidance. The 2D flows effectively associate pixels from different timesteps and improve the temporal consistency by reusing tokens in the regeneration process. Finally, the regenerated images are spatiotemporally consistent and utilized to refine the coarse 4D field to get a high-quality 4D field. Experiments demonstrate the effectiveness of our design and show significantly improved quality than baseline methods."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"4D Generation",
"Dynamic 3D Gaussian Splatting",
"Dynamic Reconstruction",
"Diffusion Models"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/2b257c8541eded997430173fc8e491e4318cc169.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/721e5947fa7bd210871cfbad004deb789fc88285.zip"
},
"title": {
"value": "MVTokenFlow: High-quality 4D Content Generation using Multiview Token Flow"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zuKrRYM3Tg | Quantized Approximately Orthogonal Recurrent Neural Networks | main | Active | recurrent neural networks;neural network quantization;orthogonal recurrent neural networks;quantization bitwidth | unsupervised, self-supervised, semi-supervised, and supervised representation learning | 1;3;3;5 | 5;3;4;3 | 3;2;2;2 | 1;2;1;2 | 4;2;3;2 | 3 | 3.75 | 2.25 | 1.5 | 2.75 | -0.852803 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* It is unclear from the paper why you quantize U but not V. Could you please elaborate on that choice?\n* It is unclear to me why the ProjUNN is much better in the copy task but worth on MNIST and PTB (both quantized and unquantized model). Do you have any idea or intuition why this is the case?\n\n\n**Comments on presentation:**\n* $\\sigma$ is a bit overuse ($\\sigma, \\sigma_o$ but also $\\sigma_{min}, \\sigma_{max}$ for eigenvalues, …)\n* The authors reuse $T$ in line 258 again as a power of the weight matrix, which seem based on the text has not any connection to the sequence length T. I suggest the authors either use a different symbol or explain why to the power of $T$ would be relevant here.\n * I can see that this likely comes from the sequence length $T$, though given there is activation function $\\sigma$ (and $Ux$) there, it is unclear why this would be important or relate to the final task loss. If this is the case, please elaborate.\n* Many equations do not have numbers which makes it hard for readers to refer to them (even if authors do not refer themselves to some equations, a future reader might want to do so). Therefore I suggest to follow common best practices and number all equations that are not in-line."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* The investigation on weight quantization effects on ORNNs is interesting and insightful (sec 3.4).\n* Based on the experiments the proposed QRNNs (especially with STE-Bjorck) seem to perform well, even till 5-6 bits.\n* The paper is clearly written and easy to follow (except a few minor points)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors study the quantization of orthogonal recurrent neural networks (ORNNs). They first investigate the impact of quantization on the orthogonality, including effects on the singular values of the quantized weights and deriving bounds for them. Then the authors propose and investigate two flavors of quantized orthogonal recurrent neural networks (QORNNs) which combine traditional ORNN training with quantization-aware training (QAT). In the experiments they show that QORNNs are competitive with SOTA ORNN, LSTM and FastRNN on a various benchmarks such as copying, flavors of MNIST and PTB."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The biggest shortcoming of this paper is the limited novelty. Several points regarding this:\n * The authors combine two off the shelf ORNN training algorithms with the most simple (and arguably outdates, see later) flavor of QAT. In other words, they add $q_k(W_i)$ to these algorithms (cf line 3, 4 in algorithms 1, 2, respectively) and assume STE.\n * While I found the investigation in sec 3.4 interesting (see above), I would have expected that these insights come back in their algorithm design (or at least that the authors evaluate such metrics for their proposed approach, look at question whether these metrics in practice correlate with QORNN performance etc).\n* On the quantization side, they seem to miss/ignore a lot of innovation that happened in the last 5ish years which could help to potentially have much better performance at low bit-widths. Most importantly:\n * They argue keeping the scale fixed is common practice (line 241/242). Since the introduction of LSQ [1] this is long not common practice anymore.\n * It is unclear to me why the authors not consider per-channel weights. As per-channel weights still work with full fixed-point/integer arithmetic [2], this is supported by almost any HW and commonly used by most quantization literature in the past years. This adds an additional degree of freedom that might be very helpful for ORNNs as by changing the scale ($\\alpha$), as one could ensure that rows (or columns, depending on notation) still have a norm of 1 which seems important (cf sec 3.2).\n* The proposed QORNNs are actually not orthogonal as the name or text suggest. Only the latent weights ($W_i$) are orthogonal, but the actual quantized weights used for inference ($q_k(W_i)$) are only approximately orthogonal. 
As the authors themselves show (cf figure 1, sec 3.4), this doesn’t give any guarantees and could be detrimental.\n* As the paper positions itself more as ‘explorative’ (and QAT contribution is very limited), I would expect that they also more closely explore PTQ approaches. There are several degrees of freedom that are unexplored, e.g. setting the correct scale (alpha) or applying/adapting common PTQ algorithms such as AdaRound [3] or GPTQ [4].\n* Minor:\n * The experimental evaluation is limited to only ‘toy-sized’ datasets/tasks. \n * While it is nice they obtain bounds for q_min/max, the established bounds are so loose that from a practical perspective such bounds are not useful (nor similar to the earlier study they are used to design or evaluate the algorithm).\n * I do miss a comparison to challenges in quantizing transformers or SSMs. While it is arguable whether comparing to transformers it out of the scope of such a paper (as the authors claim), at least discussing/comparing whether the challenges in ORNNs are similar or different to transformers/SSMs would be helpful (e.g. do they suffer from similar outliers as observed in transformers, cf. [5,6]). \n * Regarding SSMs, there is some recent work that would be interesting to compare to [7, 8]. \n\n**References:**\n* [1] Esser et al., ICLR 2020, Learned Step Size Quantization.\n* [2] Nagel et al., 2021, A White Paper on Neural Network Quantization.\n* [3] Nagel et al. ICML 2020, Up or Down? 
Adaptive Rounding for Post-Training Quantization.\n* [4] Frantar et al., ICLR 2023, Gptq: Accurate post-training quantization for generative pre-trained transformers.\n* [5] Bondarenko et al., EMNLP 2021, Understanding and Overcoming the Challenges of Efficient Transformer Quantization.\n* [6] Dettmers et al., NeurIPS 2022, LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale.\n* [7] Pierre et al., ES-FoMo 2024, Mamba-PTQ: Outlier Channels in Recurrent Large Language Models\n* [8] Abreu et al, NGSM 2024, Q-S5: Towards Quantized State Space Models"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1) Please provide a more detailed description of the experimental environment.\n2) What is the meaning of \"STE-Bjorck or STE-projUNN\" in Table 2? I think the experimental results of each model should be separated for clarity. This could be improved by separating the experimental results of each model into different columns.\n3) Why is STE-projUNN absent from Table 3?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper is easy to read, and the proposed QORNN is effective even at long-term dependency tasks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a Quantized Approximately Orthogonal Recurrent Neural Network (QORNN), which is the quantization of Orthogonal Recurrent Neural Network. To address the inherent instability of quantizing vanilla RNN and ORNN, two quantization-aware training strategies are adopted. The method achieves impressive results on a variety of standard benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- There are awkward and missing citations throughout the paper. For example, the citation for fastGRNN should have been first on line 114 instead of line 135. Besides, I think the representation, such as \"activations for LSTM Hou et al. (2017)\" in line 105, \"Penn TreeBank dataset Marcus et al. (1993) (PTB)\" is awkward.\n- There are some issues on writing (see minor issues and questions below)\n\nMinor Issues\n1) There are some minor points on writing:\n- Line 90: “The reasons .. is” -> “The reasons .. are”\n- Lines 122 and 321: footnotes are written incorrectly\n- Footnote 4: “Sections 3.4” should be revised correctly\n- Equation in line 203, footnote 5, caption of Table 3, and line 466: please add a comma to the end of the sentence\n- Line 286: add a colon to the next of the bolded sentence\n- Subtitle 3.4: “QORNN are” -> “QORNN is” or “QORNNs are”\n- Tabels 1 and 2, “fromKiani et al. (2022)” -> “from Kiani et al. (2022)”\n- Table 2, “sizes for Copy, MNIST” -> “sizes for Copy-task, sMNIST”\n2) Are the words \"steps\", \"power\", \"timesteps\" and \"length\" the same meaning? Mixed terms can confuse the reader. I recommend revising them for clarity. Other examples include copy/copy-task, SSMs/SSM/SSSM, etc."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "* In Tables 1-2, could you explain what do you mean with “NU” (‘Not useful because of other figures’)?\n* What is the memory increase during training for the proposed methods compared to FastRNN, LSTM and vanilla RNN?\n* In Table 1, it would be nice if authors could also include accuracy for copy-task, as it is easier to interpret the differences compared to cross-entropy numbers.\n* Any reasoning why STE-Björk is faster than FastRNN and FastGRNN on pMNIST in Table 14? It seems quite surprising given that at every optimizer step Björck procedure requires running 15 iterations of the recursion with subsequent backpropagation through this procedure. While it's not the case for some other tasks, providing a detailed analysis of the computational complexity or runtime breakdown for STE-Björk vs. FastRNN/FastGRNN might be quite insightful."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* Seems fairly easy to implement.\n* Authors thoroughly motivate and explain challenges of constructing and training ORNNs & instabilities caused by quantization. \n* A comparison of model sizes in resulting QORNN models vs. other methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper explores the quantization of the weight matrices in Orthogonal Recurrent Neural Nets (ORNNs). Authors introduce and compare two strategies for constructing ORNNs or approximately orthogonal RNNs with quantized weights (QORNNs): STE-projUNN and STE-Björck. These strategies are extensions of two methods of constructing full-precision ORNNs, respectively projUNN and Björck, that employ quantization-aware training with Straight-through assumption (STE).\n\nQORNNs are evaluated on a synthetic copy-task, PTB and few variants of MNIST (pMNIST, sMNIST) datasets and compared against their floating-point variants and other families of RNNs, including LSTM, GRU, fastRNN, fastGRNN. The most efficient models achieve results similar to state-of-the-art full-precision ORNN, LSTM and FastRNN on a variety of standard benchmarks, even with 3-bits quantization."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The proposed methods seem very straightforward: combine the strategy of constructing ORNNs with STE, which is a standard and well-known technique in the quantization literature.\n* L286-L300: authors derive bounds for approximate orthogonality of q(W) which they themselves state are too loose to be useful in practice. It would be quite insightful to track and report $||W_q W_q^T – I ||$ during training, to see if the reason of proposed method working well in practice is due to $q(W)$ being fairly close to being orthogonal or not.\n* L079: authors claimed SotA results on pMNIST, however they did not compare against other 4-bit sequence-to-sequence models, for instance SSM models such as Mamba [1], transformer models such as LLaMA-3 [2] etc. \n* Considered benchmarks are very small by today standards. While most of them seem standard in ORNN literature, it would make a story more convincing if authors included some of the more recent real-world datasets and benchmarks. For instance, it would be insightful to evaluate the proposed methods on some of common reasoning language tasks (MMLU, HellaSwag, Winogrande).\n\n[1] Gu et al., “Mamba: Linear-Time Sequence Modeling with Selective State Spaces”. ArXiV: 2312.00752\n\n[2] Dubey et al., “The Llama 3 Herd of Models”. ArXiV: 2407.21783"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Why not further reduce the precision of activations by adopting static learned scales akin to LSQ or PACT?"
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "This a very well-written paper with an extensive literature review, a great introduction to ORRNs and their challenges, many experiments and a thorough appendix with many implementation details for reproducibility"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces Quantized Orthogonal Recurrent Neural Networks (QORNNs) to address the memory and computational limitations of traditional Orthogonal Recurrent Neural Networks (ORNNs) on compact devices. While ORNNs are valued for their capability to handle long-term dependencies in tasks like the copy-task, their reliance on full-precision weights makes them unsuitable for deployment in resource-constrained environments. The authors propose two methods for quantizing ORNNs: Quantization-Aware Training (QAT) with orthogonal projections and post-training quantization for activations, ensuring both efficiency and stability in the recurrent weight matrices. These methods allow QORNNs to maintain the benefits of orthogonality while using fewer bits for weight representation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "There is unfortunately not any real innovation in the paper. The only contribution of the paper is applying QAT with a straight-through estimator (a very old idea in quantization literature) to existing optimization methods for learning ORNNs. In fact, the QAT technique is not even state-of-the-art as they could learn the quantization ranges $\\alpha$ through gradient by adopting LSQ$^{[1]}$ or PACT^{[2]}. \n\nThe theoretical analysis of the impact of quantization in orthogonal matrices is not optimal. It is well known that using MinMax quantization for low-bit quantization (< 6 bits) leads to significant degradation. This is why search-based methods are normally adopted that find the scale or range that minimizes the Forbenious ( or other norms) between quantized and unquantized vectors (commonly known as MSE-based ranges). It would be more interesting to see the plots of Figures 1 & 2 using optimal $\\alpha$ values rather than ones based on the maximum range. \n\n[1] LSQ: Learned Step Size Quantization, Esser et al. \n[2] PACT: Parameterized Clipping Activation for Quantized Neural Networks"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "This paper explores the quantization of orthogonal recurrent neural networks, and analyses the impact of the quantizer bitwidth on model performance."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024quantized,\ntitle={Quantized Approximately Orthogonal Recurrent Neural Networks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zuKrRYM3Tg},\nnote={under review}\n}"
},
"abstract": {
"value": "In recent years, Orthogonal Recurrent Neural Networks (ORNNs) have gained popularity due to their ability to manage tasks involving long-term dependencies, such as the copy task, and their linear complexity. However, existing ORNNs utilize full precision weights and activations, which prevents their deployment on compact devices.\n\nIn this paper, we explore the quantization of the weight matrices in ORNNs, leading to Quantized approximately Orthogonal RNNs (QORNNs). The construction of such networks remained an open problem, acknowledged for its inherent instability. We propose and investigate two strategies to learn QORNN by combining quantization-aware training (QAT) and orthogonal projections. We also study post-training quantization of the activations for pure integer computation of the recurrent loop. The most efficient models achieve results similar to state-of-the-art full-precision ORNN, LSTM and FastRNN on a variety of standard benchmarks, even with 3-bits quantization."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"ecurrent neural networks",
"neural network quantization",
"orthogonal recurrent neural networks",
"quantization bitwidth"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/6ad5969f13e1f6c3b7e019a5072c629170e11449.pdf"
},
"presentation": null,
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Quantized Approximately Orthogonal Recurrent Neural Networks"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zuOnOAHBMy | HELENE: Hessian Layer-wise Clipping and Gradient Annealing for Accelerating Fine-tuning LLM with Zeroth-order Optimization | main | Active | optimization;large language models | foundation or frontier models, including LLMs | 3;3;3;5 | 4;4;4;3 | 2;2;3;2 | 1;2;2;3 | 3;2;3;2 | 3.5 | 3.75 | 2.25 | 2 | 2.5 | -1 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "The 20× speedup is largely theoretical here, as HELENE’s slower convergence due to SPSA may offset this benefit in practice. \ncould you kindly provide a detailed profiling log comparing the actual run time?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "HELENE’s integration of annealed A-GNB gradients and layer-wise clipping shows the authors' awareness of the specific computational nuances of finetuning LLM architectures, the discussion of EMA showed the and other related discussion showed good motivation.\n\nPlus, memory saving is always crucial for better finetuning LLMs these days, HELENE's layer-wise approach is a solid step to conserve memory."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes HELENE, which builds upon prior work, MeZO, by integrating a second-order preconditioning method designed to achieve faster, more stable convergence while maintaining a low memory footprint. The authors evaluate HELENE on prominent models, RoBERTa-large and OPT-1.3B, and report promising results in terms of speedup and accuracy."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "In the core algorithm:\n- Gradient Computation (Steps 4–5): HELENE computes the gradient based on Simultaneous Perturbation Stochastic Approximation (SPSA), a zeroth-order optimization technique, which allows it to approximate gradients without needing backpropagation. This step saves memory, which is important for large models. However, SPSA tends to converge more slowly than direct gradient methods, and while the authors use annealing, it’s unclear if this fully mitigates the slower convergence.\n- Annealing Schedule (Step 6): The annealing parameter α=Anneal(t) in Step 6 adjusts the moving average coefficient based on iteration count. the improvement here compared to EMA is not obvious in figure 5? (is this a wrong figure link)\n- Although layerwise clipping seems beneficial, it also introduces additional hyperparameters and tuning complexity.\n- The authors assert that the inclusion of the Hessian diagonal significantly improves convergence rates, but diagonal Hessian approximation methods generally struggle to capture the full curvature dynamics in deep networks. \nFor HELENE to have a true advantage, empirical evidence comparing convergence rates with Adam and other optimizers is crucial:\nthe accuracy improvement of 1.5% might not fully justify the added implementation and tuning complexity, especially if simpler optimizers (like Adam or AdamW)"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. I would like to know the GPU memory usage of HELENE in your experiments. In theory, HELENE’s memory requirements should exceed those of LoRA, so it’s important to provide a detailed comparison with LoRA (without MeZO). Could you please provide a table listing the actual memory consumption and performance results of HELENE, MeZO, LoRA, and full-parameter Adam fine-tuning when fine-tuning OPT-1.3B with the same batch size?\n2. What is the time overhead of HELENE compared to the original MeZO? Could you please report the wall clock time per iteration for both HELENE and MeZO when fine-tuning OPT-1.3B with the same batch size?\n3. In Section 3.3.1 on the annealing mechanism, the paper mentions that this mechanism helps reduce the impact of noisy or outdated gradients in the later stages of training. However, since $\\alpha$ decreases as the time step increases, the actual weight of momentum from earlier steps actually increases, seemingly contradicting the claim that it “reduces the impact of past gradients.” Could you clarify this statement? Also, please explain the phrasing in Line 258: “reducing the learning rate as training progresses”?\n4. In Algorithm 2, Statement 4 appears without clear derivation. Could you please explain how this statement was derived?\n5. From my understanding, HELENE seems decoupled from MeZO, meaning it should theoretically be applicable in standard first-order optimization as well. Is my understanding correct? If so, do you have any preliminary results showing whether HELENE is effective in first-order methods?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The design of HELENE is elegant, addressing various issues that arise during the optimization process. The incorporation of mechanisms such as gradient annealing and layer-wise clipping demonstrates a thoughtful approach to enhancing training stability and convergence.\n2. The analysis of the convergence steps is welcome, providing insights into the efficiency of the optimization process and enhancing the overall contribution of the paper.\n3. Experimental results provide evidence of HELENE's improvements over MeZO, particularly in fine-tuning tasks with RoBERTa-large and OPT-1.3B, showcasing the effectiveness of the proposed method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a novel zeroth-order fine-tuning method, HELENE, designed to enhance the efficiency and stability of model training through three core components. First, HELENE introduces an annealing mechanism in the momentum's exponential moving average (EMA) to mitigate bias in the SPSA-estimated gradients and reduce the impact of noise in the gradients during the later stages of training. Second, it proposes a new estimator for the Hessian matrix, known as the Asymptotic Gauss-Newton-Bartlett (A-GNB) estimator, which enables diagonal Hessian estimation without the need for label sampling, simplifying the computation process. Finally, HELENE implements layer-wise Hessian clipping, which more effectively preserves essential second-order information, ultimately improving the convergence and stability of the optimization process. Experimental results on RoBERTa-large and OPT-1.3B demonstrate that HELENE achieves an improvement over MeZO, with notable gains in both convergence speed and model performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The presentation and structure of the paper exhibit significant issues. While the paper claims to introduce a zeroth-order optimization method, the entire methodology section appears disconnected from both zeroth-order principles and MeZO. Three key components lack intuitive connections, making them seem disjointed at least from the writing perspective.\n2. HELENE incurs substantial memory overhead compared to MeZO. The introduction of momentum and Hessian as optimizer states brings memory costs close to full parameter fine-tuning, which is significantly higher than that of parameter-efficient fine-tuning methods like LoRA.\n3. The experimental component is lacking, as there are no evaluations on larger models such as LLaMA2-7B or LLaMA2-13B. Additionally, the ablation study is insufficiently detailed. Given the three key designs in HELENE, it would be beneficial to create three variants, each removing one of the designs to observe the impact on performance.\n4. The writing contains typos. For example, Line 134 states \"first-order methods like MeZO,\" which should be corrected to \"zeroth-order methods like MeZO.\" In statement 1 of Algorithm 1, there is a repeated $\\epsilon$, and \"hyperparameters $\\lambda_i$\" should be written as \"hyperparameters {$ \\lambda_i $}.\""
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Is there any proof or reference that shows A-GNB is an unbiased estimator of Hessian? *A-GNB's construction is similar to the empirical Fisher's construction on the diagonal part*, and the expectation of empirical Fisher is known to *not equal* to the Hessian. On the other hand, Fisher information matrix needs label sampling from the model and its expectation is equal to the negative of Hessian.\n\n2. The ICL baselines in Table 2 seem dubiously weak for some trials (ICL show no or minimal improvements from the zeroshot in RTE, WSC, COPA, and ReCoRD). For COPA, the ICL's performance is even worse than zeroshot's."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. A-GNB estimator removes the need to sample label from the model.\n2. The experiments are comprehensive."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces Helene with 3 components adds to MeZO: (1) A-GNB estimator of Hessian diagonal without label sampling (2) layerwise adaptive clipping (3) momentum update annealing. The convergence steps improve from Sophia's $O(d)$ to $O(\\text{max}_i d_i)$. The experiments follow the settings of MeZO and are conducted for RoBERTa-large and OPT-1.3B across multiple tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The theory is significantly mismatched with the experiments. The theory is actually proven under the case of first-order gradients, but the method and the whole experiments are performed under zeroth-order SPSA estimated gradient (this is obfuscated by their notations in Algorithm 1 that I believe they should use $\\hat{g}$ instead of $g$ when they are referring to ZO estimated gradient. I also checked their codes and the experiments are fully in ZO). A direct transfer from FO or second-order optimizer's convergence rate to ZO convergence rate is *a nontrivial task* and usually *unapplicable*. \n \n2. I don't see the annealing momentum and A-GNB is used in Lemma 10. If I understand correctly, the Lemma 10 applies to the case that we have exact gradient and clipped Hessian diagonal, but Algorithm 1 uses estimators for both. \n\n3. A comparison with other ZO optimizers, such as HiZZO [1], that also leverage second-order information to address the varying parameter curvature issue is missing. \n\n4. By employing \"A-GNB estimator\" that uses true labels, Helene's Hessian estimator becomes clipped second moment of (SPSA-estimated) gradient, which is also shown in their code. The difference from Helene and Adam seems only to be (1) clipping second-moment term, (2) update second-moment term in less frequency, and (3) annealing momentum update. In this case, I would doubt how Helene outperforms Adam in general cases. From Figure 5a, it seems that the greatest performance boost from MeZO to Helene is actually momentum annealing. \n\n\nThe first weakness is critical and I would vote for a reject score at this moment.\n\n[1] Zhao, Yanjun, et al. \"Second-order fine-tuning without pain for llms: A hessian informed zeroth-order optimizer.\" arXiv preprint arXiv:2402.15173 (2024)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "How does this method compare to properly tuned low-memory first-order methods (like AdaLomo, GaLore, ...)?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Proposed method is better than MeZO."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes to incorporate diagnonal hessian information to improve zero-order optimization methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The practical utility of the proposed optimizer is very questionable.\n\nDuring training and fine-tuning, the GPU memory is consumed by two things:\na) Parameters + optimizer buffers.\nb) Stored forward pass activations.\n\nLet's call the memory needed for parameters P. Let's call memory required for storing activations G.\nG can be easily reduced by doing gradient checkpointing (this is oneliner with current models).\nAlso, as the model's size grows, G becomes much smaller than P (because G scales linearly with model width and P quadratically). \nWhen doing ordinary Adam, one needs 4P + G memory.\nWhen doing something like LoRA or GaLORE, one needs P + G + <a little> memory. \nEverybody is doing LoRA & friends because gradient activations are small, and storing parameters more than once is a big problem.\nIn extreme case, we are doing LoRA over quantized parameters (QLoRA), which needs P/4 + G memory. Again, testament than in practice G is much smaller than P.\nAlso, it is possible to drop most of the optimizer buffers and update parameters during a backward pass (like AdaLOMO), which again takes P + G memory.\n\nYet, this method just proposes using 2P memory because, in addition to parameters, it also needs to store diagonal Hessians. \nIf I have space for 2P memory, I can probably make my batch size very small and just use SGD with momentum (note that I can do tricks like AdaLOMO does and not store more than one computed gradient).\nI just cannot imagine any setting where I would want to use this method."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduce HELENE, an optimizer that incorporates layer-wise clipped diagonal Hessian together with gradient annealing for accelerating fine-tuning of LLMs."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024helene,\ntitle={{HELENE}: Hessian Layer-wise Clipping and Gradient Annealing for Accelerating Fine-tuning {LLM} with Zeroth-order Optimization},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zuOnOAHBMy},\nnote={under review}\n}"
},
"abstract": {
"value": "Fine-tuning large language models (LLMs) poses significant memory challenges, as the back-propagation process demands extensive resources, especially with growing model sizes. Recent work, MeZO, addresses this issue using a zeroth-order (ZO) optimization method, which reduces memory consumption by matching the usage to the inference phase. However, MeZO experiences slow convergence due to varying curvatures across model parameters. To overcome this limitation, we introduce HELENE, a novel scalable and memory-efficient optimizer that integrates annealed A-GNB gradients with a diagonal Hessian estimation and layer-wise clipping, serving as a second-order pre-conditioner. This combination allows for faster and more stable convergence. Our theoretical analysis demonstrates that HELENE improves convergence rates, particularly for models with heterogeneous layer dimensions, by reducing the dependency on the total parameter space dimension. Instead, the method scales with the largest layer dimension, making it highly suitable for modern LLM architectures. Experimental results on RoBERTa-large and OPT-1.3B across multiple tasks show that HELENE achieves up to a 20× speedup compared to MeZO, with average accuracy improvements of 1.5%. Furthermore, HELENE remains compatible with both full parameter tuning and parameter-efficient fine-tuning (PEFT), outperforming several state-of-the-art optimizers. The codes will be released after reviewing."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"optimization",
"large language models"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/398a49e82074eedab0f00d7846431f1a2ed3a09d.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/5575a9535ba0dca3c8b0cc19534e042b96b0f5c7.zip"
},
"title": {
"value": "HELENE: Hessian Layer-wise Clipping and Gradient Annealing for Accelerating Fine-tuning LLM with Zeroth-order Optimization"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zuuhtmK1Ub | Differentiable Implicit Solver on Graph Neural Networks for Forward and Inverse Problems | main | Active | Graph Neural Networks;Differentiable solvers;Implicit schemes;Numerical modelling;Inverse problems | applications to physical sciences (physics, chemistry, biology, etc.) | 1;1;3;3 | 3;3;4;3 | 2;2;1;1 | 1;1;2;2 | 1;1;1;1 | 2 | 3.25 | 1.5 | 1.5 | 1 | 0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. In line 74, the authors discuss the limitations of automatic differentiation in JAX. Further elaboration on this limitation would improve the motivation for this approach.\n\n2. Graph neural networks are frequently used to manage unstructured grid points. Since the paper emphasizes integration with the finite volume method with its local conservation property in line 58, it would be beneficial to include experiments validating these conservation properties. \n\n3. Equation (12) appears to involve matrix inversion on the right-hand side of the proposed gradient formulation. Could the authors address whether this matrix inversion contributes to computational costs, comparable to previous methods?\n\n4. In Equations (12) and (13), gradient computations are proposed. An alternative approach might involve solving the implicit equation through optimization techniques commonly used in deep learning, such as constructing a minimization problem for Equation (5) combined with a data loss function, potentially avoiding matrix inversion. Could the authors discuss this approach?\n\n5. While the finite volume method can accommodate various boundary conditions, the paper considers only Neumann boundary conditions. Is there a specific reason for this choice?\n\n6. In line 253, the authors claim lower computational costs for their method than the explicit Euler scheme, which requires smaller time steps. Could the authors provide a detailed comparison of the computational cost per each time step to support this claim?\n\n7. In Figure 2, the initial scale of the loss is relatively high, making it difficult to assess whether the loss converges to zero after 160 epochs. Given that a coarser grid, intended to reduce computation, may negatively impact estimation accuracy even if the loss function converges to zero, a guideline for determining sufficient loss minimization would be beneficial.\n\n8. 
In Figures 3 and 4, the recovered permeability only captures general trends rather than precise values. However, the proposed method in (c) accurately approximates the true data distribution, which suggests that the problem may be inherently ill-posed, where the coefficient may not be unique in this setting. Could the authors clarify whether this issue arises from the problem or the numerical method?\n\n9. All experiments utilize a large number of data points, which may facilitate finding a solution. Additional experimental results with fewer grid points would strengthen the paper."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "By employing an implicit scheme with optimized gradient computation, the proposed method reduces the required number of time steps. \nThey present a differentiable framework for both forward and inverse methods, enabling a learnable numerical approach based on discrete time steps. \nAdditionally, the paper explores applications in inverse problems, often employing irregular unstructured grids as used in practical scenarios."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a novel framework that combines graph neural networks with the finite volume method, to address implicit schemes. (There was a challenge that differential equation solvers typically avoid due to the additional computation complexity of handling implicit equations.)"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While the underlying idea is promising, the paper would benefit from stronger experimental or theoretical justification for the proposed methodology. Additional clarity and motivation for the approach would enhance the paper’s impact."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "* What would be the limitation of the method?\n* What would be the potential benefit of using machine learning for linear PDE over classical methods?"
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* The work tries to build a framework that works with mesh coarsening, forward, and inverse problems."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The work considers mesh coarsening, forward, and inverse problems and investigates the implicit solver. In the numerical experiments, the authors evaluated the performance of the method regarding each problem."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The novelty of the work is limited. Incorporating FVM into GNN is not new and considered in, e.g., [Jessica et al. ICML 2024 https://arxiv.org/abs/2311.14464 ] and [Horie et al. ICML 2024 https://arxiv.org/abs/2405.16183v1 ]. The construction of gradients presented in Section 2.3 seems strongly related to the adjoint method, which is a standard way to deal with inverse problems. The implicit method for GNN is considered in the area of implicit GNNs, e.g., [Gu et al. NeurIPS 2020 https://arxiv.org/abs/2009.06211 ]. The authors state that these are their novelty, but there is existing work for each. The authors should cite these works and clarify the added novelty from the authors.\n* The evaluation is weak. There is only one baseline for the experiment in Section 3.2 and nothing for the ones in Section 3.3 and 3.4. With the current form, the reviewer cannot asses the effectiveness and superiority of the model.\n* The presentation is not clear. The figure may miss the labels (a), (b), and so on for Figures 2, 3, and 4. It is not clear what is \"data 1\", \"fitting 1\", \"data 2\", and \"fitting 2\" in Figures 2 and 3."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "* Why should we only consider the Neumann boundary in equation (1)? Is it difficult to consider other Robin boundaries?\n\n* I don't understand the role of R in equation (4). What does it mean as a measurement operator?\n\n* Does S(theta) change depending on the equation of the PDE to be solved? Can you explain this further?\n\n* In equation (8), we need to find A^{-1} in the end. Isn't the cost for this large?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "Figure 1 effectively illustrates the overall pipeline, demonstrating experimental results that apply the combination of GNN and FVM to both forward and inverse problems."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes an integrated approach for solving forward and inverse problems by creating a new pipeline that combines Graph Neural Networks (GNNs) with Finite Volume Methods (FVM) to enable automatic differentiation with implicit-time stepping."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "First and foremost, the paper feels incomplete. The biggest concern is the lack of discussion about other approaches that use GNNs or integrate FVM with deep learning to solve PDEs. A “Related Work” section should be added to explain how the proposed model differs from recent studies and highlight its novelty. Although Section 2 on theory explains the problem setup to some extent, more detailed steps and methods for training the proposed approach should be included. Section 3, the experimental part, merely lists the results for forward and inverse problems without discussing how this method compares to existing GNN- and FVM-based approaches. For instance, the study \"Learning to Solve PDE-constrained Inverse Problems with Graph Networks\" solves inverse problems using GNNs—how does the proposed method differ from this approach, and what advantages does it offer? Experimentally, does it outperform in solving inverse problems?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "See weakness"
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The question is interesting and combining GNN with finite element method seems natural."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper explores the use of graph neural networks for solving forward and inverse problems and particularly focuses on the incorporation of implicit solver. However, the writing is subpar and the procedures and advantages are not well explained. The experiments are also lack comparison with other methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The writing is subpar. There are many typos and grammatical errors. For example, \"Compute $\\nabla_bL$ with (12), whats is equivalent so the solution of a single linear system.\" should be \"Compute $\\nabla_bL$ with (12), which is equivalent to solving a single linear system.\"\n2. One main focus of this paper is the incorporation of implicit solver. However, using an iterative solver and in a deep learning setting is well-studied in the Deep Equilibrium Models (DEQ) literature. The authors should compare their method with DEQ.\n3. The experiments are not very convincing. The results in Section 3.4 is very poor and in no experiments the authors compare their method with other methods."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We integrated graph neural networks (GNNs), the finite-volume method, implicit time-stepping to develop a fully differentiable modeling pipeline and applied it to forward and inverse problems of geoscience."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024differentiable,\ntitle={Differentiable Implicit Solver on Graph Neural Networks for Forward and Inverse Problems},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zuuhtmK1Ub},\nnote={under review}\n}"
},
"abstract": {
"value": "Partial differential equations (PDEs) on unstructured grids can be solved using message passing on a graph neural network (GNN). Implicit time-stepping schemes are often favored, especially for parabolic PDEs, due to their stability properties. In this work, we develop a fully differentiable implicit solver for unstructured grids. We evaluate its performance across four key tasks: a) forward modeling of stiff evolutionary and static problems; b) the inverse problem of estimating equation coefficients; c) the inverse problem of estimating the right-hand side; and d) graph coarsening to accelerate forward modeling. The increased stability and differentiability of our solver enable excellent results in reducing the complexity of forward modeling and efficiently solving related inverse problems. This makes it a promising tool for geoscience and other physics-based applications."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Graph Neural Networks",
"Differentiable solvers",
"Implicit schemes",
"Numerical modelling",
"Inverse problems"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/b8e1b8badd26a8159eb0a9af740e5b7a6ca78b74.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to physical sciences (physics, chemistry, biology, etc.)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Differentiable Implicit Solver on Graph Neural Networks for Forward and Inverse Problems"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zv9jedBExg | Role of Momentum in Smoothing Objective Function and Generalizability of Deep Neural Networks | main | Active | deep learning theory;degree of smoothing;generalizability;nonconvex optimization;SGD with momentum;smoothing property | optimization | 3;3;3;6 | 4;4;4;4 | 1;2;2;3 | 2;1;2;3 | 2;2;3;3 | 3.75 | 4 | 2 | 2 | 2.5 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "NA"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- I am also curious about Nesterov's version of momentum, which is also covered by the QHM formulation if I remember correctly. Does it show similar behavior as SHB in terms of the degree of smoothing?\n- I am curious about whether there exists a method that can somehow adapt to the degree of smoothing, similar to sharpness-aware minimization (SAM) that adapts to the sharpness."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The concept of search direction noise is novel. By framing momentum’s effects through this new type of noise, the authors provide a clearer understanding of how momentum contributes to generalization.\n- The theoretical analysis is solid, and the empirical results back it up nicely. The authors link specific hyperparameters to the degree of smoothing, which should make these insights practically valuable for tuning models.\n- This paper is well written, with a clear flow of ideas."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper takes on an interesting challenge by exploring the role of momentum in SGD and its effect on both smoothing and generalizability. The authors consider the question: if momentum reduces gradient noise, how does it still end up improving generalizability? By leveraging the idea of search direction noise—a unique take on the noise added by momentum—the authors offer a plausible explanation that ties smoothing directly to the model’s generalization performance. Their approach combines theoretical insights with experiments, primarily focused on SHB (Stochastic Heavy Ball) and QHM (Quasi-Hyperbolic Momentum) variants."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- In section 4.2, the authors use the critical batch size to estimate the variance of the stochastic gradient of certain optimizer, which seems quite strange to me. Can't we just store the iterates $\\lbrace x_0, \\ldots, x_t \\rbrace$, and the batches to directly compute the variance, which should be much more accurate? Using this ground truth variance would verify the connection between critical batch size and variance.\n- While the ResNet18 and CIFAR100 experiments are useful, they feel a bit narrow. Adding experiments on a variety of architectures, like Transformers or larger datasets (e.g., ImageNet), would help make the conclusions feel more robust and applicable across different tasks."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Can you provide more clarification to the above weaknesses?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper provides clear images and illustrations for their experimental results."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a measure of smoothing effect for momentum methods and shows its impact on optimization and generalization. The main comparison is between three optimizers SGD, SHB and NSHB."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The theory in this paper is flawed. This includes\n\n1. The claim on line 254 is questionable, and as a result the viewpoint of function smoothing is invalid. The search direction differences $\\omega_t^{SHB}$, $\\omega_t^{QHM}$ are not Guassians in general. Even if the paper claims to have identified that their probability density looks like a gaussian curve, they may not be isotropic and its shape is crucial to the optimization and generalization performance of SGD and SHB (e.g. https://arxiv.org/abs/1803.00195). Even if it is isotropic gaussian, line 254 may also be incorrect as Theorem 3.1 only gives upper bounds of the Guassian norm, so one cannot establish equality in line 254.\n\n2. From figure 1 and line 282-285 the authors depict a different trend of smoothing change across batch sizes between SHB and QHM (or NSHB), which is inaccurate. In figure 1 the trend is only given by the upper bound estimate (3)-(5) and the difference in trend may be only due to the fact that (4) and (5) use different ways to estimate the upper bounds, which are not tight. For instance at an infinity batch size, as there is no noise at all in all the gradients, there should not be any noise in the SHB updates. Actually QHM is only a reparameterization of SHB so their degree of smoothing should follow similar trends. This point is also corroborated by Theorem 4.1 & 4.2 that the bounds for QHM and SHB are similar and the phenomenon in figure 1 is not observed here.\n\nIt is also confusing to the readers that the smoothing measures seem to have no connection to the result of theorem 4.1&4.2, and the results of theorem 4.1&4.2 seem to have little connections to the actual optimization and generalization behavior in the general non-convex optimization regime.\n\n3. It has been established by previous work (e.g. 
https://arxiv.org/abs/2307.15196) that the degree of smoothing is not responsible for explaining generalization for momentum, which is contradictory to the claim of the current paper. The experimental results in this paper are well-observed by previous works that QHM and SGD have similar behaviors; and the behavior difference QHM and SHB is due to a change of effective learning rate. By setting SHB to have learning rate $\\eta(1-\\beta)$, previous work also shows that the new SHB and SGD have similar performances with different degrees of smoothing."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please see the weaknesses"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "-Inspired by random smoothing, this work proposed a novel concept ``degree of smoothing’’ which are connected to gradient noise and generalization. This perspective is reasonable to me. \n\n-The ``degree of smoothing’’ can help theoretically analyze SGD, SHB, and QHM. The theoretical results are meaningful.\n\n-Deriving the critical batch size is interesting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the role of momentum in smoothing objective functions from a gradient noise perspective and how it affects generalizability of DNNs. First, it analyzed the ``degree of smoothing’’. Second, it estimated the crtical batch size and the variance of stochastic gradients. Third, it empiricallu studied how the generalization depend on the degree of smoothing. Finally, it discussed the reported results and explained ``the contradiction that exists between momentum and stochastic noise’’."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "-While the theoretical results are meaningful for gradient noise/variance analysis, they do not directly capture the connection to the generalization.\n\n-The empirical evidences are not comprehensive enough. The experimental results only suggest the connection between generalization and the degree of smooth. The connections between generalization and some statistic are very common. I cannot how the ``degree of smooth’’ can predict generalization.\n\n-SGD in PyTorch is SHB rather than NSHB. Line 80 made a false statement.\n\n-The work needs to significantly improve literature review, including recent works on momentum, gradient noise & generalization. \n\n-This work tends to make overclaim and ignore the contributions of a lot previous relevant works. The missed references including (but not limited to): [1] studied the convergence of momentum; [2] studied gradient noise/generalization of momentum; and a lot of studies analyze how gradient noise affect minima sharpness/generalization. Please carefully survey the papers on momentum, gradient noise & generalization.\n\nRefs:\n\n[1] Yan, Y., Yang, T., Li, Z., Lin, Q., & Yang, Y. (2018, July). A unified analysis of stochastic momentum methods for deep learning. In Proceedings of the 27th International Joint Conference on Artificial Intelligence (pp. 2955-2961).\n \n[2] Xie, Z., Yuan, L., Zhu, Z., & Sugiyama, M. (2021, July). Positive-negative momentum: Manipulating stochastic gradient noise to improve generalization. In International Conference on Machine Learning (pp. 11448-11458). PMLR."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- In line 1786, it is states that \"These results demonstrate that each search direction noise, follows a normal distribution.\". Can you run statistical test, such as Kolmogorov-Smirnov test, for this fact ?\n- Is the Gaussian argument in line 255 an hypothesis of the paper or a demonstrated fact ?\n- Can you precise why increasing the degree of smoothness will lead to a flatter landscape around local minimum (which lead to higher generalizability) ?\n- Can you justify why Assumption (A1) is realistic ?\n- The assumption 2.1 (A2) (ii), is-it realistic to have a constant independent of $x_t$ ? For my point of view, Figure 5 does not permit to conclude that the assumption is realistic. In fact, for a finite number of iteration, this assumption will be verify but for a non-defined number of iterations, it is not clear.\n- Assumption A1 seems to imply A4, can you discuss this implication ?\n- Can you give me an interpretation for Assumption 2.1 (A4) ?\n- Line 273 : the process is a \"gradient descent in sense of expectation\", what do you mean exactly ? Equation (2) is not exactly a gradient descent, the argument of $\\hat f_{\\eta \\psi^{SHB}}$ is $y_t$ and not $\\mathbb{E}_{\\omega_t^{SHB}}(y_t)$.\n- Why there are two parameters $\\nu$ and $\\beta$ in Algorithm 2 ? For me since $\\nu > 0$, it is a NSHB Algorithm with parameter $\\nu \\beta$.\n- Theorem 4.1 and 4.2 are named convergence analysis but they do not prove that the sequence converges (even in a weak sens) to a local minimum or a critical point of $f$. Moreover Proposition A.2 suggests to minimize by $0$, $<x_t-x, \\nabla f(x_t)>$ and not maximize it as in Theorem 4.1 and 4.2. Can you provide more explanation about the exact implication of these results ? Have this kind of convergence formulation been used in previous works ?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "- The paper gives a good introduction to the problem of understanding momentum in a stochastic context. \n- The interesting notion of \"search direction noise\" is introduces for the analysis, instead of the stochastic noise of the gradient estimator.\n- Experiments are run in realistic context with neural network image classification."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper focus on the role of momentum for SGD. The authors explain theoretically the improvement induce by some momentum strategies, including SHB and QHM. They argue that momentum induce a degree of smoothness, that they quantify, on the objective function. This lead to an explanation for the better generalizability of SHB compare to SGD. In fact, flat critical point generalize better. Finally, they provide numerical experiments on CIFAR100 and CIFAR10 image classification to support their theoretical claims."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Major weaknesses:\n- In line 255, the authors claims that $\\omega_t^{SHB}$ is Gaussian. However, in the proof of Theorem 3.1 (Appendix C.2), no proof of this fact is given. This Gaussian argument is studied for a pratical point of view in Appendix D.2. This statement is supported only by histograms, no statistical tests are provided with histograms, it is hard for me to deduce that these histograms are sample from a Gaussian distribution. Unfortunately, this Gaussian argument is the key point of the paper. In fact the degree of smoothness relies on this Gaussian property.\n- Assumption (A1) is not verify for a large class of function including quadratic function, which is comonly used in machine learning. Therefore Theorem 4.1, 4.2, Proposition 4.1 do not hold for quadratic objective function.\n\nMinor weaknesses:\n- The paper has many typos and over-complex formulation such as:\n\t- line 85 : the sentence \"This paper focuses on SHB and QHM, which covers many momentum methods, especially NSHB, but does not cover SHB.\" is confusing for me.\n\t- line 184 : \"let $G_{\\xi_t}(x)$ be the stochatsic gradient of $f$ at $x$\", do you mean that $G_{\\xi_t}(x)$ is an unbiased stochastic estimation of the gradient $\\nabla f$ ?\n\t- In Lemma 2.1, the function $f$ is supposed to be $L_f$ Lipschitz, this is not clearly state.\n\t- Assumption 2.1 (A1) Is it $f$ or $f_i$ that is $L_f$ Lipschitz ?\n\t- $\\mathcal{S}$ has not been introduced before Assumption 2.1 (A3). Is it the set of all data points $z_i$ ?\n\t- line 267 : There is a $\\eta$ missing in the expression, it must be $f(y_t - \\eta \\phi^{SHB} u_t)$.\n\t- Assumption 4.1 is equivalent to suppose that the sequence $x_t$ is bounded. If Assumption 4.1 is verify then the sequence is bounded $\\|x_t\\| \\le D(0)$. For instance, in Theorem 4.1 of [1], the authors directly suppose that the sequence is bounded. 
I suggest to simplify this assumption into \"the sequence $x_t$ is bounded\", which is a common assumption in stochastic gradient descent algorithms analysis.\n\t- line 487 \"there is an impressive correlation between the degree of smoothness and model generalizability\" : it seems to not be a mathematical correlation but a more complex relation between these two quantities. In fact, if the degree of smoothness is too small or too large, model generalizability is low.\n\t- line 1578 : there is a typo in the latex compilation.\n\n[1] Diederik P Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations, pp.1–15, 2015."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024role,\ntitle={Role of Momentum in Smoothing Objective Function and Generalizability of Deep Neural Networks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zv9jedBExg},\nnote={under review}\n}"
},
"abstract": {
"value": "For nonconvex objective functions, including deep neural networks, stochastic gradient descent (SGD) with momentum has faster convergence and better generalizability than SGD without momentum, but a theoretical explanation for this is lacking. Adding momentum is thought to reduce stochastic noise, but several studies have argued that stochastic noise actually contributes to the generalizability of the model, which raises a contradiction. We show that the stochastic noise in SGD with momentum smoothes the objective function, the degree of which is determined by the learning rate, the batch size, the momentum factor, the variance of the stochastic gradient, and the upper bound of the gradient norm. By numerically deriving the stochastic noise level in SGD with and without momentum, we provide theoretical findings that help explain the training dynamics of SGD with momentum, which were not explained by previous studies on convergence and stability, and that resolve the contradiction. We also provide experimental results for an image classification task using ResNets that support our assertion that model generalizability depends on the stochastic noise level."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"deep learning theory",
"degree of smoothing",
"generalizability",
"nonconvex optimization",
"SGD with momentum",
"smoothing property"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/fb8121a8a4c21e3105a4041d41a66eacce16cbee.pdf"
},
"presentation": null,
"primary_area": {
"value": "optimization"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Role of Momentum in Smoothing Objective Function and Generalizability of Deep Neural Networks"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zvYJ1qG1Fy | Parameter Space Representation Learning on Mixed-type Data | main | Active | Representation learning; Parameter space; Diffusion model; Bayesian flow networks | generative models | 3;3;5;5 | 3;3;3;2 | 2;3;3;2 | 2;2;2;3 | 1;1;2;2 | 4 | 2.75 | 2.5 | 2.25 | 1.5 | -0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Why is the evaluation protocol different for continuous and discrete data, and apart from changing the parameter space, are there other differences between the continuous and discrete models?\n- Can you elaborate on the significance of the lambda parameter in tables 1 and 4?\n- Looking at table 1, you report significantly higher FID scores for DiffAE than was reported in the cited paper. Can you please explain why this is?\n- Can you please elaborate on the following statement for figure 4? “(b), the learned semantics exhibit progressive, time-varying changes”"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- Novel and well-motivated approach to parameter space representation learning. The unification of modeling of different data modalities should be of high practical value.\n- The paper provides a clear description of the fundamentals of the proposed method, clearly illustrating the introduced core components.\n- Comprehensive empirical evaluation on both generation and downstream tasks.\n- The proposed method performs well on the paper's benchmarks (though there is a large discrepancy between the results reported here and in the baseline methods papers that should be addressed)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper treats representation learning and introduces ParamReL, an extension to Bayesian Flow Networks to learn parameter space latents semantics. The motivation is the ability of BFNs to unify representation learning to mixed data modalities. The extension allows learning low dimensional latents by introducing two new components: a self-encoder that learns low-dimensional latent semantics from intermediate parameters and a conditional decoder that generates outputs based on both the latents and parameters. Here these are implemented as a U-Net. These are trained by optimizing a loss capturing the ELBO and a weighted mutual information term between the latent and parameters. ParamReL permits sampling and reverse-sampling which are both described in the paper, and allow reconstruction and interpolation of samples.\n\nThe experiments are across multiple datasets and their binarized versions (MNIST, FashionMNIST, CelebA, CIFAR10, Shapes3D). The method's representations are evaluated on downstream classification problems. Additionally, ParamReL is evaluated on reconstruction, interpolation and disentanglement. These results are compared to various baselines. All the objective results presented favor variants of ParamReL over the other baselines."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper refers to itself as a method to learn latent semantics of mixed data types (title, motivation, etc). Yet this ability is not experimentally verified, all experiments are on individual data modalities.\n- Having to specify a hyperparameter to control the trade-off between the reconstruction quality of the model and the representation quality will be difficult in practice.\n- Details beyond the core components would benefit from clarification across the paper. Examples include:\n - The experimental protocol, including how the various metrics such as FID were evaluated.\n - The exact parameterizations for the different experiments. There seems to be some detail in table 2, though how exactly it relates to e.g. table 1 should be clarified.\n - The total compute time for training\n- There is no ablation study. In particular there are no experiments without the mutual information score and there is no comparison to the original BFN.\n- The linked code repository is empty."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. Diffusion-based representation learning algorithms like InfoDiffusion infer a latent representation from data space directly into latent space. Your proposal infers latent representations from parameter space. Both the data space in InfoDiffusion and the parameter space in your case have the same dimension, don’t they?. What would you say is the reason for the better performance of ParamReL, as compared with InfoDiffusion?\n2. Opposite to InfoDiffusion, Rombach et al. [2022], whom the authors cite, trains a diffusion model directly on representation space. Has anyone tried to study the quality of the representations learned by Rombach et al. [2022] *from continuous-data* as you do in Table 1? What would you say are the key differences between their representations and yours, *in the continuous-data case*? What makes your approach better?\n3. Is the prior distribution over the latent code $p(\\mathbf{z}\\_t)$ independent of time? In line 171 you write that it \"follows a Gaussian distribution\", but do the parameters of the Gaussian change with time? If not, why did you decide to constrain the time-dependent posterior with a static prior? \n4. Did you include the actual expressions for the inverse of the Bayesian update function $h^{-1}$ somewhere in the paper? I don’t manage to find them.\n\n**References**\n\n- Wang et al. [2023]: InfoDiffusion: Representation Learning Using Information Maximizing Diffusion Models\n- Rombach et al. [2022]: High-Resolution Image Synthesis with Latent Diffusion Models"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This work represents an elegant alternative to diffusion-based models for representation learning;\n2. The authors demonstrate that their methodology can indeed by used to infer latent representations from both continuous and discrete data, and compare the quality of said representations against those of both classical and very recent baselines, which gives credibility to their results (see however the weaknesses below);\n3. The authors investigate the quality of the content encoded by their inferred representations with a large set of experiments;\n4. This work could motivate further research that aim to increase our understanding of the type of information that is encoded into the BFNs' sequential process."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Inspired by recent works which demonstrate that the pretrained (network) parameters in diffusion models encode sensible, semantic information that can be leveraged in downstream tasks (see e.g. Baranchuk et al., 2021), the authors proposed a modification to the recently introduced Bayesian flow networks (BFNs), to infer low-dimensional latent representations encoding similar semantic information. \n\nLike diffusion models, which are trained to progressively denoise some artificially corrupted input data, BFNs learn to progressively modify the parameters of their so-called *output distribution*. Note that, since the BFNs’ dynamics take place in the parameter space of the output distribution, they can readily be used to handle discrete and continuous data. Also note that the dimension of the parameters space and that of the data is the same in BFNs.\n\nThe key ideas behind the authors’ proposal are (i) to introduce a *self-encoder*, which maps the time-dependent, progressively-modified parameter of BFNs into a lower dimensional latent representation; and (ii) to modify the output distribution of BFNs to take as input not only its progressively-learned parameter, but also the newly inferred latent representation. To ensure effective learning of these representations, the authors maximize the mutual information between the distribution's parameters and the latent code.\n\nThrough a series of experiments, the authors demonstrate the quality of the inferred low-dimensional representations, which outperform those from state-of-the-art diffusion-based approaches.\n\n**References**\n\nBaranchuk et al. [2021]: Label-Efficient Semantic Segmentation with Diffusion Models"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Despite its merits, especially their many experiments, I think this work needs significant rewriting before it can be published.\n\n1. The paper contains numerous typos and grammatical mistakes, which make it difficult for readers to understand its content. The paragraphs in lines 71-78 and 181-186 are but two examples. Other paragraphs not only contain such mistakes but are also written in a way that makes them difficult for readers to follow. Unfortunately, most of the experimental section, specially section 5.3, feature such problems. Thus, despite the numerous experiments, the writing style makes it complicated for the reader to go through the experimental section and, consequently, successfully judge the proposed method. I also note that Table 1 is never referenced in the main text. \n\n2. Although the authors compare against recent diffusion-based representation learning algorithms, they do not explain what the differences are between these baselines and their proposal. Why is their proposal interesting, beyond the ability of BFNs to naturally handle discrete and continuous data, as compared to the baselines? More importantly, why does it work better than the baselines? I’d suggest the authors include such discussions either in their related work or the experiment section. Likewise, I’d suggest that they also include answers to questions 1 and 2 below, or at least some content in the same direction.\n\n3. Many of the details and reasoning behind some aspects of the model are left out, or at least I couldn’t find them in the paper. See for example questions 3 and 4 below. Adding such information back into the paper will improve its presentation."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "**1 Introduction**\n- The authors correctly cite prior work highlighting the challenges of mixed-type data representation learning. However, explicitly mentioning real-world applications that use mixed-type data latent representations could clarify the motivation of this work. Additionally, it would help to articulate the specific advantages of parameter-space representation learning over output-space representation. (l. 37-42)\n\n- The promise of using Bayesian Flow Networks (BFNs) as a foundation would be more compelling if the authors provided a clearer intuition on why BFNs are better suited for handling mixed-type data than diffusion models and how they refine parametric distributions. Furthermore, what limitations prevent BFNs from capturing low-dimensional latent semantics? How would a latent representation benefit the model: are we aiming to enable latent space operations, similar to those seen in VAEs and GANs, such as latent walks? (l. 43-48)\n\n- The authors refer to “parameters of BFNs” where it might be clearer to specify “parameters of the distribution produced by BFNs” (e.g., line 53). Distinguishing these concepts would add clarity. Additionally, I thought that “parameter space representation learning” referred to learning embeddings of the model’s own parameters, which supports model transferability and interpretability. Since ParamReL instead learns representations of the parameters of the probability distribution generated by BFNs, emphasizing this distinction could help avoid reader confusion.\n\n**2 Understanding Bayesian Flow Networks -- An Alternative View**\n- The first paragraph suggests an “alternative view” of BFNs, yet it’s not immediately clear how this perspective diverges from the original one. 
Additionally, the point about the accessibility of BFN concepts in the original formulation might not be necessary.\n\n- How do BFNs avoid the expressiveness limitations seen in VAEs, where the variational distribution can yield overly simplistic distributions and result in sample over-smoothing?\n\n**3 ParamReL: Parameter Space Representation Learning**\n- In lines 140-162, it is unclear which specific contributions come from Baranchuk et al. (2021), Rombach et al. (2022), and Luo et al. (2024). Providing clarity on these references would improve the reader’s understanding.\n\n- Some networks are indicated with indexed parameters (e.g., the encoder q_{\\theta}), while others, such as \\psi, are not. Consistent notation would make it clearer which terms refer to networks.\n\n- I am not familiar with BFNs, but I was wondering why are “the series of latent semantics $\\{z_t\\}_{t=1}^T$ expected to exhibit progressive semantic changes (e.g., age, smile, skin color)” if they encode the parameters of the data distribution at each time-step (line 177)? Given that the entire distribution is refined over time, what drives the latents across time-steps to model intra-distribution features like age or skin color? Is there a formal justification or an explanation to support this claim? \n\n- Could the authors clarify the notation “p(. |-)” in line 252-253, as \"-\" is not immediately familiar?\n\n- How does the size of the latent representation $z$ compare to that of the parameters $\\theta$?\n\n**4 Related Work** \n- In describing diffusion models, it may be helpful to avoid saying they are “unsuitable” for representation learning without a clear definition of what is meant by “representation learning.” If the authors refer to representation learning as achieving a continuous, lower-dimensional latent space, this distinction should be made clear. 
Otherwise, the later point about pre-trained diffusion models being used for other tasks, which is related to representation learning, could appear contradictory.\n\n- How does ParamReL differ from InfoDiffusion in terms of learning latent representations?\n\n**5 Experiments**\n- The authors claim to address mixed-type data yet only experiment with discrete and continuous distributions—no mixed-type data is tested, and the only discrete data is binary. To strengthen the contrast with methods like InfoDiffusion, more emphasis on mixed-type data experiments, especially with categorical data, would be helpful.\n\n- In line 330, could the authors define MMD?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- ParamReL represents a promising new direction for defining latent semantics in mixed-type data.\n- By incorporating a self-encoder within the BFN framework, the authors establish a progressive encoding process that captures latent semantics across multiple steps. This design is a contribution as it supports disentanglement and time progression within the latent space. The inclusion of a mutual information term in the variational objective is well-justified.\n- The method demonstrates high-quality results across standard benchmarks, including binarized MNIST, binarized FashionMNIST, CelebA, CIFAR10, and Shapes3D."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors introduce ParamReL, a parameter space representation learning framework, to enable low-dimensional latent representation learning for mixed-type data within Bayesian Flow Networks (BFNs). ParamReL incorporates a self-encoder network within BFNs to progressively encode the annealed parametric distributions of BFNs. The latents are then used to condition the BFN’s decoder. They formulate a variational learning objective regularized by a mutual information term to enhance disentangled representation learning. Experiments on binarized MNIST, binarized FashionMNIST, CelebA, CIFAR10, and Shapes3D demonstrate that ParamReL achieves high-quality sample generation and effective latent representation learning across discrete and continuous data types."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The writing would benefit from greater precision and clarity. Specific points for improvement are provided in the questions and comments. (e.g., “parameters of BFNs” vs. “parameters of BFN-produced distributions”) \n- The proposed method appears closely related to infoDiffusion, with the main adaptation being its application to BFNs. It would strengthen the paper if the authors explicitly outlined the unique contributions of ParamReL, distinguishing it from infoDiffusion by clarifying any innovations that aren't due to BFNs. \n- The presentation begins with the question, “How to learn latent semantics in parameter spaces rather than in observation spaces of *mixed-type data* comprising continuous, discrete, and even discretized observations?” However, the experiments are limited to discrete and continuous distributions, with no testing on mixed-type data. Additionally, the discrete data tested is exclusively binary.\n- The linked GitHub repository currently includes only a README.md file, with no implementation code available."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "None."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- In lines 236-237 the authors mention “Given the straightforward definition of Bayesian update function h(·), its inverse operation is generally easy to derive. The details of such results can be found in Figure 14.” However, Figure 14 does not provide any details on the Bayesian update function. Were the authors referring to Table 2?\n- Have the authors tried to optimize the ELBO directly instead of using an additional MI term? It would be interesting to see where the former goes wrong and to justify why the MI term is actually needed.\n- In Section 3.5, the caption is called “Variational Inference for Intractable Joint Distribution”. However, the joint distribution seems to be tractable, the authors even give an expression.\n- How long does it take to train the method in comparison to other methods?\n- What does the ATTRS column in Table 1 indicate? Is that mentioned somewhere?\n- What is the influence of $\\gamma,\\lambda$? What happens if I set them low very low/high?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The proposed framework works for mixed-type data, which is often not the case for other representation learning frameworks.\n- Visualizations from latent traversals indicate that ParamReL indeed learns semantics in the data\n- Based on AUROC + FID, the results suggest that the method is able to learn meaningful representations while being able to reconstruct the observations\n- Recap of Bayesian Flow networks and visualizations helps to i) understand the method better and ii) understand the difference between BFNs"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces ParamReL, a framework that aims to answer the question of how one can learn latent semantics in parameter spaces rather than observation spaces of mixed-type data. The work builds on Bayesian Flow Networks (BFNs), which can model mixed-type data by operating in the parameter space. \n\nHowever, BFNs cannot capture high-level latent semantics in data. To that end, the authors additionally introduce a sequence of (low-dimensional) latent semantics $z_t$ obtained through a self-encoder and trained by maximizing a lower bound on the marginal log-likelihood."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The motivation in Section 3 why mutual information regularization is needed is not clear to me. In particular, the authors mention that “Considering that we cannot optimize this object directly, we can rewrite it by factorizing the rate term into mutual information and total correlation (TC)\" without explaining what the “rate term” and “TC” is. Even after reading Appendix B, it is still unclear to me.\n- It would be great if the authors could add a small section in the Appendix where they define and explain the meaning of the different performance criteria used in their work.\n- The results are reported for two different hyperparameters $\\gamma,\\lambda$. However, $\\lambda$ is never introduced in the main text of the paper. After searching the Appendix I could find it in Appendix B2 where it is introduced without mentioning why it is necessary. There is another $\\lambda$ in Eq. 9), however, I doubt that it has the same meaning as in the experiment table."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024parameter,\ntitle={Parameter Space Representation Learning on Mixed-type Data},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zvYJ1qG1Fy},\nnote={under review}\n}"
},
"abstract": {
"value": "A significant challenge in representation learning is to capture latent semantics in data mixing continuous, discrete, and even discretized observations (called mixed-type data), encountering issues like inconsistent discoveries and redundant modeling. Recently, Bayesian flow networks (BFNs) offer a unified strategy to represent such mixed-type data in the parameter space but cannot learn low-dimensional latent semantics since BFNs assume the size of parameters being the same as that of observations. This raises a new important question: how to learn latent semantics in parameter spaces rather than in observation spaces of mixed-type data? Accordingly, we propose a novel unified parameter space representation learning framework, ParamReL, which extracts progressive latent semantics in parameter spaces of mixed-type data. In ParamReL, a self-encoder learns latent semantics from intermediate parameters rather than observations. The learned semantics are then integrated into BFNs to efficiently learn unified representations of mixed-type data. Additionally, a reverse-sampling procedure can empower BFNs for tasks including input reconstruction and interpolation. Extensive experiments verify the effectiveness of ParamReL in learning parameter space representations for latent interpolation, disentanglement, time-varying conditional reconstruction, and conditional generation. The code is available at https://anonymous.4open.science/r/ICLR25-F087/README.md."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Representation learning; Parameter space; Diffusion model; Bayesian flow networks"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/042ae30e5ec41de7d49b3245247792cad8f71a9b.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Parameter Space Representation Learning on Mixed-type Data"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zvaiz3FjA9 | Designing Concise ConvNets with Columnar Stages | main | Active | Convolutional Neural Networks;Columnar Stages;Input Replication;Image Classification;Detection | applications to computer vision, audio, language, and other modalities | 3;6;6 | 5;3;4 | 3;3;3 | 3;3;3 | 3;3;3 | 5 | 4 | 3 | 3 | 3 | -0.866025 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. What's the difference between group conv and parallel columnar conv?\n2. How much of the efficiency and accuracy gain can be translated to downstream segmentation and detection work?\n3. What's the memory overhead of the input replication?\n4. How much depth is required for deep projection to have a significant impact?\n5. I see some similarity between PFF and self-attention modules in Transformers. What's the performance like when fusing all columns simultaneously?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "This paper revisits some of the fundamental design ideas in conv nets and proposes some interesting ideas. \n1. Shallow-deep projection is quite interesting. This inherits ideas from ResNet and extends them to deep connections. \n2. It achieves competitive performance (accuracy and latency) with reduced network depth and parameter counts. \n3. It also introduces a pairwise frequent fusion (PFF) to fuse information across different columns."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces Columnar Stage Network (CoSNet), which deploys parallel conv units with fewer kernels and reduces the use of 1×1 conv layers. To optimize model efficiency, the paper follows the design objectives of reducing depth and branching, controlling parameter growth, maintaining computation intensity, and using uniform primitive operations."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Please refer to the questions section, where some clarity or more experiments would be great."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Some details are missing. For example, how is the number of parallel convolutions M determined? I think that different values of M will affect the performance. Please explain these details in the text. There are other minor issues, e.g., Section 3.4 is missing in Figure 2 (c), and you should add it.\n2. How does a design like \"input replication\" improve performance, for example? The authors need to give some details in the manuscript.\n3. The related work is comprehensive. However, the authors only highlight the salient features of the previous works that they apply in their network. The manuscript can benefit from discussing shortcomings of the existing methods as research gaps in the section \"Related Work\"."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The writing is easy to read and clearly explains everything in the paper.\n2. The experimental results are good compared to previous works. Empirically, the method seems to offer strong accuracy compared to existing methods with similar architectures."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a simple and concise structure called CoSNet, which has smaller depth, low parameter count, low FLOPs, and attention-less operations, well suited for resource-constrained deployment. The work presents a range of experiments that sufficiently support its claims. It is very interesting for readers.\n\nOverall, it is a good read. The manuscript might get better if a few suggestions (given below) are incorporated."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Some details are missing. For example, how is the number of parallel convolutions M determined? I think that different values of M will affect the performance. Please explain these details in the text. There are other minor issues, e.g., Section 3.4 is missing in Figure 2 (c), and you should add it.\n2. How does a design like \"input replication\" improve performance, for example? The authors need to give some details in the manuscript.\n3. The related work is comprehensive. However, the authors only highlight the salient features of the previous works that they apply in their network. The manuscript can benefit from discussing shortcomings of the existing methods as research gaps in the section \"Related Work\"."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "As listed in Weaknesses"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The motivation is reasonable. \n2. The result is comparable with the state-of-the-art.\n3. The paper is easy to understand."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a refreshing ConvNet macro design called Columnar Stage Network (CoSNet) with smaller depth, low parameter count, low FLOPs, and attention-less operations. \nIts comprehensive evaluations show that CoSNet rivals many renowned ConvNets and Transformer designs under resource-constrained scenarios."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Typo: VanillaNet was published in NeurIPS 2023, not 2024.\n2. Some newer comparison methods are missing; all compared models were published in 2023 or earlier. \n The authors should provide more comparisons, e.g., with InceptionNeXt [1] and UniRepLKNet [2]. \n3. The Top-1 accuracy of EfficientNet-B0 is 76.3 [3] or 77.1 [4], but the authors give a much poorer result of 75.1. \n Similar problems also occur with ConvNeXt-T (82.1 in [5] but 81.8 in this paper) and EfficientViT-M5 (77.1 in [6] but 76.8 in this paper, and 522M FLOPs in [6] but 600M in this paper). \n\n\n[1] InceptionNeXt: When Inception Meets ConvNeXt. CVPR 2024\n[2] UniRepLKNet: A Universal Perception Large-Kernel ConvNet for Audio, Video, Point Cloud, Time-Series and Image Recognition. CVPR 2024\n[3] EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. ICML 2019\n[4] EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv 2019\n[5] A ConvNet for the 2020s. CVPR 2022\n[6] EfficientViT: Memory Efficient Vision Transformer with Cascaded Group Attention. CVPR 2023"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "A simple and accurate ConvNet backbone for resource constraints scenarios"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024designing,\ntitle={Designing Concise ConvNets with Columnar Stages},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zvaiz3FjA9},\nnote={under review}\n}"
},
"abstract": {
"value": "In the era of vision Transformers, the recent success of VanillaNet shows the huge potential of simple and concise convolutional neural networks (ConvNets). Where such models mainly focus on runtime, it is also crucial to simultaneously focus on other aspects, e.g., FLOPs, parameters, etc, to strengthen their utility further. To this end, we introduce a refreshing ConvNet macro design called Columnar Stage Network (CoSNet). CoSNet has a systematically developed simple and concise structure, smaller depth, low parameter count, low FLOPs, and attention-less operations, well suited for resource-constrained deployment. The key novelty of CoSNet is deploying parallel convolutions with fewer kernels fed by input replication, using columnar stacking of these convolutions, and minimizing the use of 1×1 convolution layers. Our comprehensive evaluations show that CoSNet rivals many renowned ConvNets and Transformer designs under resource-constrained scenarios. Pretrained models shall be open-sourced."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Convolutional Neural Networks",
"Columnar Stages",
"Input Replication",
"Image Classification",
"Detection"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/996ff8da29c7634c6ce8a3a818c7e7063a93c94f.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Designing Concise ConvNets with Columnar Stages"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zvoM1Wastw | A Provable Quantile Regression Adapter via Transfer Learning | main | Withdraw | Transfer Learning;Adaptation;Quantile Regression;High-dimensional Statistics;Convergence Rate | transfer learning, meta learning, and lifelong learning | Rushuai Yang;Aiqi Zhang;Chenjia Bai;Xiu Su;Yi Chen | ~Rushuai_Yang1;~Aiqi_Zhang1;~Chenjia_Bai2;~Xiu_Su1;~Yi_Chen18 | 3;3;3;5 | 4;4;3;3 | 2;3;2;2 | 2;1;1;1 | 2;3;2;3 | 3.5 | 3.5 | 2.25 | 1.25 | 2.5 | -0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
"value": "Dear Program Committee and Reviewers,\n\nAfter careful consideration, we have decided to withdraw our paper from the conference review process. While we appreciate the valuable feedback provided by the reviewers, we believe that addressing these comments thoroughly will require additional time and resources to improve the quality of our work beyond the rebuttal phase.\n\nWe would like to express our gratitude for the constructive feedback, which has provided valuable insights into how we can enhance our research. We look forward to carefully implementing these suggestions and potentially resubmitting an improved version to a future venue.\n\nThank you once again for your time and consideration."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": {
"value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors."
}
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please refer to strengths and weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper is well-presented and easy to follow.\n\nAdapting transfer learning to quantile regression is novel.\n\nThe paper provides a thorough theoretical analysis of the proposed method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a transfer learning approach for quantile regression via adapter-tuning, which enables risk-sensitive adaptation in pretrained models. It introduces a quantile regression adapter that injects sparse, low-rank parameters into a pretrained model to enhance sample efficiency and performance. The proposed quantile regression adapter is equipped with performance guarantees and supported by empirical evidence."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper highlights the effectiveness of transfer learning techniques in fine-tuning large pretrained models, but lacks methods and experiments on applying the proposed approach to such models.\n\nThe paper focuses on a linear adapter; while this offers theoretical clarity, it is unclear how to extend the proposed method to non-linear applications. A linear adapter alone has limited applicability.\n\nThe left side of equation $(4)$ should be $\\rho_{\\tau}(y-f(x;\\theta))$ instead of $\\rho_{\\tau}(x)$, as the function $\\rho_\\tau(\\cdot)$ should depend on $y-f(x;\\theta)$ rather than directly on $x$ to be consistent with equation $(3)$."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See above."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper is well-written and easy to follow. The quantile objective is well-motivated and the theoretical concepts are well explained."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the adapter-finetuning strategy for the quantile objective. It develops theoretical guarantees under linear models and regularity conditions on the sparsity of the parameter, and uses numerical experiments to demonstrate the effectiveness of the proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "My main concern is the technical contribution of the work.\n\nThe theoretical analysis:\n- To me, the theoretical results of the paper directly apply the existing analyses from high-dimensional statistics and sparse (quantile) regression. The setup of the analysis is separate from the context of adapter fine-tuning. Specifically, if we focus on the $\\delta^*$, then all the assumptions in Section 3 are about $\\delta^*$, and all the results in Section 3 can be readily implied from the existing tools based on an analysis for $\\delta^*$ and completely detached from the adapter fine-tuning context. \n\nThe numerical experiments: \n- The numerical experiments are in a small-sample and low-dimensional regime, which, to me, doesn't reflect the application scenarios for adapter fine-tuning. More experiments should be done under a more relevant context such as fine-tuning of larger models.\n\nMinor comments on the technical proofs/analyses:\n\nOverall, the paper might benefit from a more careful proofreading of the technical parts. I might misunderstand some part, but I will list my confusions below for the authors' reference. \n\n- The formula (26) is correct, but might be loose; can you reduce the order of d from 1 to 1/2?\n- From formula (30) to (31), you seem to use the inequality e^{|a| + |b|} <= e^{a+b}+e^{-a-b}. But this inequality is not true, for example, when a = 1, b = -1.\n- In line 954, I am confused by \"Last equality ... column j\". I didn't see this assumption in the previous text.\n- In formula (39), what is q? And why does (39) imply formula (48)?\n- In line 1031, I believe u should be tau. And why E[-v(tau- 1{w<=0})] = 0? You seem to assume delta^{hat} is unbiased.\n- In line 1142, the second part of lambda* should be sqrt{d/s}||v||_2. In formula (65), does the first part come from the plugging in of the first term of lambda*? If so, you seem to be missing a term that comes from underline{f}d/lambda ||v||_2^2. The second part should be sqrt{ds}||v||_2.\n- In line 143, j should be in set [d]. Same as line 282.\n- In line 145, the sum of i should start at 1, not 0.\n- In line 147, only positive semidefinite matrices have matrix square roots.\n- The formula (4) is wrong, the left-hand side should be rho(y-f(x, theta)), or correspondingly change the right-hand side.\n- In line 201, I believe the correct one is \"to learn theta*\", not theta_{s}*.\n- Could you give a reference for definition 3.2? In the book by Wainwright (2019), Definition 7.12 does not seem to have the term sqrt(|S|).\n- In line 365, I believe the precise expression is \"of the order O(d^{1/2}/n_s^{1/2})\".\n- In line 836, you should modify \"The our estimator\".\n- In line 887, |ab|_1 has no meaning, maybe you should write |a^Tb|.\n- In formula (25), the first one should be equality, and the second one should be inequality.\n- In line 1030, I believe the correct one is v = x'(delta^{hat} - delta^{tilde}). Same in formula (47)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "It would be beneficial if the authors could test the proposed method on real-world transfer learning tasks using a nonlinear (neural network-based) pre-trained model (e.g., Mistral-7B, Llama3-8B)."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The proposed research direction—designing transfer learning approaches for risk-sensitive tasks—is an important open problem to solve. The authors have conducted both empirical and theoretical analyses to test and demonstrate the effectiveness of the proposed quantile regression approach in sample-efficient transfer learning."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a quantile regression-based transfer learning algorithm designed to transfer knowledge to risk-sensitive downstream tasks. The authors introduce a measure to theoretically quantify the transferability of knowledge and provide statistical guarantees for adaptation efficiency within a linear structural model. They also evaluate the adaptation performance of the algorithm through numerical simulations. The results show the proposed method outperforms the baselines."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper's presentation is somewhat misleading. Although the introduction extensively discusses the adapter-tuning strategy in LLMs, the proposed transfer learning approach (a linear structural model) cannot be applied to this field, which may confuse readers about the scope and application of the research.\n\nThe authors only study transfer learning under a linear structural model. Both the empirical and theoretical analyses are based on this setting. I am concerned about how well this quantile regression approach will perform in real-world transfer learning tasks. There can be a significant gap when applying the proposed algorithm to transfer learning frameworks that utilize general pre-trained models, which typically involve nonlinear neural network structures such as large language models (LLMs). Additionally, the baselines used for comparison in the study are limited. The experiments are conducted solely on synthetic data, and no real-world transfer learning tasks are included."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "In Line 271, you mentioned \"Incorporating this error into the analysis of $\\widehat{\\theta}$ is nontrivial\". Could you please briefly explain where this nontriviality lies?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper considers a setting that can be summarized as regularized quantile regression. The problem is interesting in itself. The results in the paper are all supported by rigorous proofs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the linear quantile regression problem under a high-dimensional and transfer learning setting. Specifically, it assumes that the source distribution of $(X, Y)$ satisfies $Q_{\\tau}(Y|X) = X^\\top \\theta_s$ and the target distribution satisfies $Q_{\\tau}(Y|X) = X^\\top \\theta$, where $\\theta_s \\approx \\theta$. Then, under the high-dimensional setting with limited data from the target distribution, the paper proposes to learn the $\\theta - \\theta_s$ term by adding a Lasso regularizer to the original quantile loss and then minimizing the loss."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper aims to extend the Lasso-style analysis to the quantile regression setting, but the proving techniques seem to be relatively standard. As a result, the contribution may not meet the bar for a top Machine Learning conference. Additionally, the scope of the paper seems limited: the theoretical results are restricted to a realizable setting where the conditional quantile is modeled by a linear function, and the experiments are restricted to synthetic data. Furthermore, while the paper mentions transfer learning, the transfer aspect is limited to the assumption of $\\theta - \\theta_s$ being small. There is no explicit algorithmic design to address or leverage the distribution shift between the source and the target distributions."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "The paper addresses the quantile regression problem by leveraging existing pretrained models. We develop an quantile regression adapter via transfer learning and provide statistical guarantees regarding on its adaptation efficiency."
},
"_bibtex": {
"value": "@misc{\nyang2024a,\ntitle={A Provable Quantile Regression Adapter via Transfer Learning},\nauthor={Rushuai Yang and Aiqi Zhang and Chenjia Bai and Xiu Su and Yi Chen},\nyear={2024},\nurl={https://openreview.net/forum?id=zvoM1Wastw}\n}"
},
"abstract": {
"value": "Adapter-tuning strategy is an efficient method in machine learning that introduces lightweight and sparse trainable parameters into a pretrained model without altering the original parameters (e.g., low-rank adaptation of large language models). Nevertheless, most existing adapter-tuning approaches are developed for risk-neutral task objectives and the study on the adaptation of risk-sensitive tasks is limited. In this paper, we propose a transfer learning-based quantile regression adapter to improve the estimation of quantile-related risks by leveraging existing pretrained models. We also establish a theoretical analysis to quantify the efficacy of our quantile regression adapter. Particularly, we introduce a transferability measure that characterizes the intrinsic similarity between the pretrained model and downstream task in order to explain when transferring knowledge can improve downstream learning. Under appropriate transferability and structural assumptions, we establish error bounds for the estimation and out-of-sample prediction quality by our quantile regression adapter. Compared to vanilla approaches without transfer learning, our method is provably more sample efficient. Extensive numerical simulations are conducted to demonstrate the superiority and robustness of our method empirically."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": {
"value": [
"~Rushuai_Yang1",
"~Aiqi_Zhang1",
"~Chenjia_Bai2",
"~Xiu_Su1",
"~Yi_Chen18"
]
},
"authors": {
"value": [
"Rushuai Yang",
"Aiqi Zhang",
"Chenjia Bai",
"Xiu Su",
"Yi Chen"
]
},
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Transfer Learning",
"Adaptation",
"Quantile Regression",
"High-dimensional Statistics",
"Convergence Rate"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": {
"value": "yang|a_provable_quantile_regression_adapter_via_transfer_learning"
},
"pdf": {
"value": "/pdf/90a80974a7885697219256cd98c2d9c7872897b9.pdf"
},
"presentation": null,
"primary_area": {
"value": "transfer learning, meta learning, and lifelong learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "A Provable Quantile Regression Adapter via Transfer Learning"
},
"venue": {
"value": "ICLR 2025 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Withdrawn_Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||
zweyouirw7 | Spiking Transformer-CNN for Event-based Object Detection | main | Active | Event data;Object detection;Spike neural networks;Low power consumption;Transformer-CNN | applications to computer vision, audio, language, and other modalities | 3;3;3;5 | 5;4;4;4 | 2;2;2;2 | 2;2;2;2 | 2;2;2;2 | 3.5 | 4.25 | 2 | 2 | 2 | -0.333333 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "(1) The Spiking Transformer seems to reference previous work, but its innovation points are not prominent. Has the author made any corresponding contribution?\n(2) Can the author explain the issues addressed in the design for the SCB section, which is the CNN module section? And what effect does it have?\n(3) Please provide a detailed analysis of the problem solved in the feature fusion section?\n(4) Please analyze the overall network design logic. The current analysis tends towards module stacking."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "This paper proposes a novel hybrid architecture of Transformer and CNN based on pulse neurons, which has achieved good results in event object detection tasks. Among them, the author has made a reasonable design for the Transformer branch and CNN branch under the spikng neuron, and designed a feature fusion method that conforms to the hybrid architecture. The experimental part is relatively complete, with reasonable and clear composition."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This article proposes a novel SNN architecture combining Transformer and CNN for object detection. In the analysis process, the author noticed the different roles of Transformer and CNN and effectively combined them. However, in the introduction section, the author's logic seems to focus on describing the development process of SNN, rather than focusing on the problem at hand. Meanwhile, in the narrative of the contribution section, the author's innovative points are not highlighted and appear scattered. In the theoretical/methodological section, the author seems to have referenced previous work and made some iterations. However, the theoretical part did not reflect the author's contribution. This article is not mature and needs to be polished and edited to highlight its own contributions. It must be acknowledged that the author has achieved excellent results on the object detection dataset, however, this seems to depend on the number of module stacks. I suggest rejecting and polishing."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The introduction of this article does not highlight the innovative points. The combination of Transformer and CNN has been proven effective and maturely applied in ANN. The use of LIF neurons for Spiking is not an important innovation point. Furthermore, in the theoretical section, it is evident that both the Transformer architecture and CNN module were designed with reference to previous tasks, which does not constitute an important innovation point to support the paper. The subsequent feature fusion module described two types separately, but the differences between types and the issues addressed were not elaborated in detail. Reading through the methods section, the innovative points and logical description of the methods are similar to the module stacking without highlighting the author's own analysis."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. In your discussion section, you referenced the \"Integer-Valued Training and Spike-Driven Inference Spiking Neural Network for High-performance and Energy-efficient Object Detection.\" However, no comparative analysis was provided. Could you elaborate on the rationale behind this omission in your experimental evaluation?\n2. Regarding the notable performance decline on the 1Mpx dataset, please give an explanation."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. This work is the first attempt to combine Spiking-Transformer and Spiking-CNN architectures for event-based object detection.\n2. Spike-TransCNN achieves competitive performance on the Gen1 dataset, whose mAP is 0.336 and energy consumption is 5.49 mJ.\n3. The visualization and graphical presentation are of high quality and clarity."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes Spike-TransCNN, a novel hierarchical architecture combining Spiking Transformers and Spiking Convolutional Neural Networks for event-based object detection. The work addresses the challenge of balancing detection accuracy and energy efficiency in event-based object detection."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The performance on the 1Mpx dataset (0.250) is significantly inferior to existing methods (0.483), without adequate explanation or analysis.\n2. Lack of comprehensive comparisons with recent state-of-the-art SNN detection methods, such as \"Integer-Valued Training and Spike-Driven Inference Spiking Neural Network for High-performance and Energy-efficient Object Detection.\"\n3. The whole architecture primarily transplants the established Transformer-CNN paradigm into the SNN domain, with limited Innovation.\n4. The paper exhibits an overreliance on descriptive language while lacking theoretical analysis for the performance improvements of the proposed architecture."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Could you provide a more detailed explanation or pseudo-code for the spike-driven token selection and feature fusion processes? It would be helpful to understand the precise mechanics of these operations within the architecture.\n2. Have you considered evaluating Spike-TransCNN on additional event-based datasets, possibly with larger or more complex scenes? If so, were there any particular challenges, and if not, could you discuss the potential limitations of the model’s generalizability?\n3. How does Spike-TransCNN compare with hybrid SNN-ANN models in terms of both accuracy and energy efficiency? Including such comparisons could help contextualize the advantages of your proposed model.\n4. Can you provide more insight into the sensitivity of the model to various hyperparameters, such as the number of time steps or membrane potential thresholds? An ablation study on these parameters might help to optimize the model further.\n5. Could you include side-by-side visual comparisons with other models to highlight Spike-TransCNN’s strengths, particularly in scenarios with occlusions or rapid movement? This might better illustrate the advantages of your architecture in challenging conditions.\n6. Some of the terms related to spiking mechanisms and attention mechanisms could be more consistently used. Would you consider revising these terms for clarity, especially to make the paper more accessible to readers from a broader audience?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The hierarchical integration of Spiking Transformer and CNN blocks is well-motivated and effectively leverages the strengths of each architecture, leading to improvements in both accuracy and energy efficiency.\n2. The paper provides robust experimental validation, including comparisons with state-of-the-art methods, and energy efficiency analyses, showcasing the advantages of Spike-TransCNN over conventional ANN-based methods.\n3. The focus on energy-efficient event-based object detection aligns with the needs of edge-computing applications, and the results demonstrate significant energy savings, which is a major contribution in neuromorphic computing."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a novel Spiking Transformer-CNN (Spike-TransCNN) architecture aimed at enhancing event-based object detection by combining the global information extraction capabilities of Spiking Transformers with the local feature extraction strengths of Spiking CNNs. This hybrid approach addresses current limitations in spiking neural networks (SNNs) for object detection, particularly in incorporating both global and multi-scale local features, and demonstrates promising results on the Gen1 dataset."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Although the hybrid architecture is described in detail, the explanations of specific processes, such as spike-driven token selection and intra- and inter-stage feature fusion, could be clearer. Including pseudo-code or flow diagrams might enhance the reader’s understanding of the model’s operation.\n2. While the results on the Gen1 dataset are compelling, it would strengthen the paper to evaluate the model on more diverse datasets, particularly larger or more complex event-based datasets, to demonstrate generalizability.\n3. The paper could benefit from a discussion on how Spike-TransCNN compares with hybrid SNN-ANN models, given their potential to balance energy efficiency and performance. This would contextualize the performance and energy efficiency gains of Spike-TransCNN more effectively.\n4. While there is an ablation study on some components, further exploration on the impact of various hyperparameters (e.g., number of time steps, membrane potential thresholds) could provide insights into optimizing the architecture for different applications.\n5. Consistent terminology, particularly around spiking mechanisms and attention mechanisms, would improve readability. Some abbreviations and terms could be clarified for non-specialist readers.\n6. The paper includes visualization results, but providing side-by-side comparisons with other models on challenging scenarios could offer a clearer view of the model’s strengths in handling occlusions and motion."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. While the paper's primary contribution lies in integrating global information from Spiking Transformer with local information from Spiking CNNs, the motivations for its various modules lack coherent alignment with this central objective.\n2. The paper would benefit from additional ablation studies and in-depth analysis of the STS module.\n3. The authors are recommended to carefully review the paper for grammatical accuracy and logical coherence."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper proposes the Spike-TransCNN model, successfully integrating global information from Spiking Transformer with local information from Spiking CNNs, which is beneficial for future development in this field.\n2. The paper introduces several interesting blocks, such as STS and SCB, which effectively improve the model's performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a hierarchical Spiking Transformer-CNN that effectively combines global and local information, successfully improving the performance of SNN-based object detectors while reducing power consumption."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The integration of global information from Spiking Transformer with local information from Spiking CNNs was first proposed in [1], rather than in this work.\n2. The used or proposed modules in this paper, including SSA, SCB, and both intra-stage and inter-stage spike feature fusion modules, fail to preserve the spiking characteristics of SNNs due to their incorporation of non-spiking computations. Consequently, Spike-TransCNN would be more accurately categorized as a hybrid ANN-SNN model rather than a pure SNN.\n3. The paper omits comparisons with other pure SNN models (such as SpikeYOLO [2]) and hybrid models (like EAS-SNN [3] and SpikingViT [4]). Furthermore, the model's mAP performance on GEN1 and 1Mpx datasets is substantially inferior to these state-of-the-art approaches.\n4. The motivation for proposing STS is unclear - why is it used in shallow layers instead of SSA?\n5. The motivation for proposing SCB is also inadequately explained - on lines 260 and 263, it merely states that it \"leverages the local feature extraction capabilities\" and \"captures local multiscale information\". However, this raises questions: Couldn't standard 3x3 and 5x5 convolutions achieve the same objective?\n6. The reported energy consumption raises significant concerns. Given that Spike-TransCNN has double the parameters of SFOD with similar firing rates, and extensively employs SPIKE-ELEMENT-WISE ADDITION operations (introducing non-spiking computations) across its modules, the claimed lower energy consumption compared to SFOD requires further justification.\n7. The paper contains formatting issues, specifically on line 382 where the Firing Rate and Energy (mJ) are overlapping.\n\n[1] Yao M, Hu J K, Hu T, et al. Spike-driven Transformer V2: Meta Spiking Neural Network Architecture Inspiring the Design of Next-generation Neuromorphic Chips[C]//The Twelfth International Conference on Learning Representations.\n[2] Luo X, Yao M, Chou Y, et al. 
Integer-Valued Training and Spike-Driven Inference Spiking Neural Network for High-performance and Energy-efficient Object Detection[J]. arXiv preprint arXiv:2407.20708, 2024.\n[3] Wang Z, Wang Z, Li H, et al. EAS-SNN: End-to-End Adaptive Sampling and Representation for Event-based Detection with Recurrent Spiking Neural Networks[J]. arXiv preprint arXiv:2403.12574, 2024.\n[4] Yu L, Chen H, Wang Z, et al. Spikingvit: a multi-scale spiking vision transformer model for event-based object detection[J]. IEEE Transactions on Cognitive and Developmental Systems, 2024."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024spiking,\ntitle={Spiking Transformer-{CNN} for Event-based Object Detection},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zweyouirw7},\nnote={under review}\n}"
},
"abstract": {
"value": "Spiking Neural Networks (SNNs) enable energy-efficient computation through event-driven computing and multiplication-free inference, making them well-suited for processing sparse events. Recently, deep Spiking Convolutional Neural Networks (CNNs) have shown energy efficiency advantages on event-based object detection. However, spiking CNNs have been limited to local and single-scale features, making it challenging for them to achieve better detection accuracy. To address this challenge, we propose a hierarchical Spiking Transformer-CNN (i.e., Spike-TransCNN) architecture, which is the first attempt to leverage the global information extraction capabilities of Spiking Transformers and the local information capture abilities of Spiking CNNs for event-based object detection. Technically, we first propose using the Spiking Transformer to extract global features and employ a multi-scale local feature extraction CNN module to complement the Spiking Transformers in local feature extraction. Then, we design intra-stage and inter-stage feature fusion modules to integrate global and multi-scale local features within the network architecture. Experimental results demonstrate that our Spike-TransCNN significantly outperforms existing SNN-based object detectors on the Gen1 dataset, achieving higher detection accuracy (mAP 0.336 vs. 0.321) with lower energy consumption (5.49 mJ vs. 7.26 mJ). Our code can be available in the supplementary materials."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Event data",
"Object detection",
"Spike neural networks",
"Low power consumption",
"Transformer-CNN"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/0fe9f423832d48df3a3b6736c33825f974638ea3.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/19d8dfc468a31350942757182a0da2cd88d0d03f.zip"
},
"title": {
"value": "Spiking Transformer-CNN for Event-based Object Detection"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zwuemuTiN8 | TACD-GRU: Time-Aware Context-Dependent Autoregressive Model for Irregularly Sampled Time Series | main | Active | Time series models;Irregularly sampled time-series;Autoregressive models;Recurrent neural networks | learning on time series and dynamical systems | 3;5;5;6 | 4;3;4;3 | 1;2;3;3 | 1;2;2;2 | 3;2;3;3 | 4.75 | 3.5 | 2.25 | 1.75 | 2.75 | -0.688247 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See the #Weakenesses section."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is well-structured and clearly written, making it accessible to readers. \n\n2. The TACD-GRU model presents a straightforward approach by combining context-based and attention-based predictions.\n\n3. The paper includes extensive empirical evaluations across multiple real-world datasets, demonstrating the model's superior performance compared to baselines."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work proposes a dual model named TACD-GRU, which combines long-term context and short-term last observations for irregularly sampled multivariate time series prediction. For long-term context, TACD-GRU uses GRU to update the hidden state given an observed value and state decays with time between observations. For short-term observations, an attention mechanism is used to summarize last observations over all variates. These two prediction are combined by the weight output by an MLP. Single- and multi-step experiments on three real-world datasets demonstrate the effectiveness of the proposed model."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Despite the simplicity, the context-based part is lack of novelty. The use of GRU to model abrupt changes and continuous processes to model slow variations between observations has been extensively studied in previous works[1,2]. Espscially for event sequence works[3,4], which exaclty use GRU and exponential decay.\n\n[1] Neural Jump Stochastic Differential Equations. In NeurIPS-2019.\n\n[2] GRU-ODE-Bayes: Continuous modeling of sporadically-observed time series. In NeurIPS-2019.\n\n[3] The Neural Hawkes Process: A Neurally Self-Modulating Multivariate Point Process. In NeurIPS-2017.\n\n[4] Neural Relation Inference for Multi-dimensional Temporal Point Processes via Message Passing Graph. In IJCAI-2021.\n\n2. This paper lacks an evaluation of efficiency and complexity. Since the proposed method is based on RNNs, it cannot be parallelized, and the attention mechanism grows quadratically with the number of variates. Therefore, such analysis and evaluation are necessary."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1) Please execute more repeats when it is necessary! (see above)\n\n2) \"One limitation of ODE-based methods is that it reaches to a solution as a function of initial condition. However, the initial condition cannot be adapted to the observed distribution. NeuralControlled Differential Equations Kidger et al. (2020) is proposed to address this limitation. \" ← I do not really understand the statement here? In any case the initial condition can be taken as a variable and back propagated to. In case of Neural control DEs as well as in case of GRU-ODE-Bayes jumps are possible so observational information can overwrite the past and make the initial condition irrelevant. What is special about NCDEs? Can youl clarify?\n\t\n3) How this should be interpreted : xMSE (×10−2) and MAE (×10−2) ?\n /The values in the table are already multiplied by 10-2 so proper value is say 0.020 -> 2.0, or the content of the table have to be multiplied with 10-2 so the proper value is 0.02 -> 0.0002 ?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Handling sporadically observed time series is an important research topic that helps unlocking the full potential of time series modeling for practical problems. The present method combines attention with RNN based model and time decaying hidden states. The paper provide detailed ablation studies to show the benefits of these components and their combination."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a new model for the Irregularly sampled multivariate time series prediction task combining the benefit of RNNs and attention mechanism. The GRU-based asynchronous RNN unit forms the context based model while attention over the last observations focuses on short term dependencies."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Some of the standard deviations are extremely high in the tables, it is clear that executing only 3 repeats is not enough.\nIn case of shortage of available computational power, please at least execute more repeats on cases when there is extreme std (within an order of magnitude to the mean) and/or they are in the TOP of the list.\nJust for example: On Table 1 you cannot bold Contiformer in the first column without bolding RKN-Delta_t and mTAND as well as it stands now.\n\nAppendix B mentiones that special effort was made to stay comparable with Schirmes et al.\nIt is very hard to compare the evaluation of different papers even if it was carried out on the same dataset with same methods. I quickly checked the intersection of methods on the present work, Schirmes et al. and De Brouwer et al.\n8 method was tested in common with Schirmes et al. \n\n| MODEL | Present paper 1step | Present paper multistep | De Brouwer et. al. | Schirmes et al. Extrapolation |\n|------------|---------------------|-------------------------|--------------------|-------------------------------|\n| Latent ODE | 0.6 (0.2) | 152.3 (1.7) | 0.96 (0.11) | 203.4 (0.5) |\n| GRU-D | 1.5 (0.9) | 142.8 (19.8) | 0.53 (0.06) | 171.8 (1.5) |\n| GRU-ODE-B | n.a. | n.a. | 0.43 (0.07) | 543.7 (102.0) |\n| GRU | n.a. | n.a. | 0.75 (0.12) | 207.1 (1.51) |\n| ODE-RNN | 1.9 (1.7) | 172.4 (1.9) | n.a. | 195.5 (46.6) |\n| CRU | 3.0 (1.9) | 136.1 (27.5) | n.a. | 127.3 (6.6) |\n| f-CRU | 2.0 (0.7) | 161.1 (5.1) | n.a. | 156.9 (32.1) |\n| mTAND | 0.9 (0.6) | 159.3 (1.7) | n.a. | 236.0 (3.8) |\n| RKN-dT | 0.6 (0.4) | 146.1 (21.9) | n.a. | 149.1 (27.2) |\n| GRU-dT | 3.5 (0.1) | 170.1 (7.0) | n.a. | 208.1 (5.4) |\n\n(Here i assumed the most likely interpretation of “MAE (×10−2) ”.\n\nJust as a quick check I calculated Spearman rank correlatinon of the 2nd and 4th column, and I got 0.57, with not significant (0.1389) p-value. 
So without more standardization of the experiments, we cannot even decide the order between methods (on the same dataset!). I have to note here that it is not the job of the present authors to solve this issue of the field, but currently the best I can assume is that the method is in the legue of SOTA.\n\nRelative to Schirmes et al, you do not compare to RKN, GRU and GRU-ODE-Bayes. Why leaving out this three? IN case of GRU and RKN we can argue that they tend to be always worse than the $\\Delta t$ versions.\n\nMinor:\nClarify notation, for example in Eq. 9, the formula for $g_{t, \\Delta T}$ contains $\\Delta\\tau$\n\n“the most set of recently observed values for all variables,” -> set of most recently?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. why use 'prediction' rather than 'forecasting' as the term?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper tackles an important problem in time-series analysis, as current methods for time-series analysis are mainly based on the assumption of continuous and regular interval observations.\nThe paper proposes the design of a combination of modeling both temporal and long-term dependency, which could potentially enhance performance in complex time-series scenarios. \nThe experiment results look nice and insightful, especially the visualizations."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a refined RNN-based model designed to predict irregularly sampled multivariate time series. The proposed architecture combines a context-based model that captures long-term dependencies and a last-observation-based model that focuses on short-term temporal patterns."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper lacks a well-organized structure, does not read like a cohesive written paper, and does not highlight its motivation and objectives. For example, the way the authors describe their methods in the introduction. Also, the module names in the intro, method and results are not consistent. \n2. It reads like the paper is just a combination of two modules that lack novelty. \n3. The limitation of the current literature in multivariate irregular time series is not clear and seems not relevant to what these authors are trying to solve. \n4. Misses an important baseline and discussion of irregular time series forecasting problems [1]. \n\n\n[1] Zhang, W., Yin, C., Liu, H., Zhou, X. & Xiong, H. (2024). Irregular Multivariate Time Series Forecasting: A Transformable Patching Graph Neural Networks Approach. <i>Proceedings of the 41st International Conference on Machine Learning</i>, in <i>Proceedings of Machine Learning Research</i> 235:60179-60196. Available from https://proceedings.mlr.press/v235/zhang24bw.html."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "In addition to the Weaknesses mentioned above\n\n**Q1.** Often the datasets considered here are medical and climate which can have seasonalities. (How) does TACD-GRU incorporate seasonality in the dataset?\n\n**Q2.** Is ${\\Delta \\tau} = (\\tau_t - \\tau_{t-1})\\cdot \\mathbb{1}$ where $\\mathbb{1}$ is a column vector with all 1s (lines: 236)\n\n**Q3.** What will be ${x_t^*}$ for $t=0$ (initial value). Assuming it to be a zero vector, can we differentiate observation with a 0 value from the initial value?\n\n**Q4.** While two evaluation metrics, MSE and MAE, are used, for what metric is the loss optimized?\n\n**Q5.** In 4.2 (Model training) it was mentioned that the entire sequence is reconstructed. Is reconstruction loss an auxiliary loss to the forecasting loss or both losses are treated in the same manner?\n\n**Q6.** In Algorithm 1, what is the difference between $\\Delta t$ and $\\Delta T$"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "**S1.** The paper addresses the significant yet under-explored topic of forecasting irregularly sampled time series with missing values. The paper is well-written and easy to follow\n\n**S2.** The approach of forecasting using two complementary modules—context-based and last-observation-based—is interesting\n\n**S3.** The experiments adhered to the existing protocol, and the results demonstrate that the proposed model is promising. In the appendix and code, hyperparameters used for all the competing models are provided\n\n**S4.** A new irregularly sampled time series forecasting dataset from MIMIC-III is useful for the community to further research in this field"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Forecasting irregularly sampled time series with missing values is a critical yet under-researched area due to the inherent complexities of dealing with both irregular sampling and missing data. This paper introduces a novel model, TACD-GRU, an RNN-based approach designed to forecast irregularly sampled time series with missing values. TACD-GRU incorporates two key mechanisms: (1) a context-aware component that learns from historical observations across the entire timeline, and (2) a time-aware component that focuses on the most recent observations (the last observation). The model was evaluated against a wide range of baselines across three datasets, demonstrating promising results."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**W1.** Some important literature is missing ([1], [2]). These are graph (attention)-based models designed for forecasting irregularly sampled time series with missing values\n\n**W2.** In terms of modeling, using a two-component model is not new. I see some similarities between SIMTSC [4] and TACD-GRU. SIMTSC also has two components: 1) learning from the context using an RNN, and 2) learning from recent observations (using a GNN). I agree that the specific implementation of each component is different in both models and they are applied in different contexts, but, in my humble opinion, it is necessary to highlight the similarities and distinguish the differences.\n\n**W3.** There are existing datasets from MIMIC-III and MIMIC-IV used in GraFITi [1], GRU-ODE-Bayes and Neural Flows [3]. Instead of using these, why did the authors create a new dataset based on MIMIC-III?\n\n**W4.** Considering the recurrent nature of the model, how efficient is the training process? Could the authors provide a runtime and/or evaluation time comparison?\n\n\n\n**References:**\n1. Yalavarthi, Vijaya Krishna, et al. \"GraFITi: Graphs for Forecasting Irregularly Sampled Time Series.\" AAAI 2024\n2. Zhang, Weijia, et al. \"Irregular Multivariate Time Series Forecasting: A Transformable Patching Graph Neural Networks Approach.\" ICML 2024\n3. Biloš, Marin, et al. \"Neural flows: Efficient alternative to neural ODEs.\" NeurIPS 2021\n4. Zha, Daochen, et al. \"Towards similarity-aware time-series classification.\" SIAM SDM 2022\n\n**Minor:**\n- **MW1.** For better clarity, I suggest using $x_{t, \\Delta t}$ to represent $x_{t + \\Delta t}$ (line 50), and similarly $g_{t, \\Delta t}$ for $g_{t + \\Delta t}$ (line 279)\n- **MW2.** The legend and labels in Figures 2 and 3 are difficult to read"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024tacdgru,\ntitle={{TACD}-{GRU}: Time-Aware Context-Dependent Autoregressive Model for Irregularly Sampled Time Series},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zwuemuTiN8},\nnote={under review}\n}"
},
"abstract": {
"value": "Multi-variate time series data and their models are extremely important for understanding the behavior of various natural and man-made systems. Development of accurate time series models often requires capturing intricate relationships among the variables and their dynamics. Particularly challenging to model and learn are time series with irregular and sparse observations, that may arise in domains as diverse as healthcare, sensor and communication networks. The irregular sampling in these time series violates a key assumption of most existing models, which expect observations at regular intervals. In this work, we propose and study TACD-GRU: a Time-Aware Context Dependent Gated Recurrent Unit architecture for multi-variate time series prediction that accounts for irregularities in observation times of individual time series variables and their dependencies. Our model defines a novel recurrent unit that is triggered by the arrival of a new observation to update its state, and to support variable value predictions at any future time. TACD-GRU's prediction module dynamically combines two complementary prediction models: (i) context based model that captures long-term dependencies, and (ii) last observation based model that focuses on short-term temporal patterns. Our proposed model shows superior performance over existing state-of-the-art (SOTA) models on both single-step and multi-step prediction tasks across three diverse real-world datasets. We provide additional empirical evidence to highlight the effectiveness of TACD-GRU's individual components in capturing complex temporal dynamics in irregularly sampled data."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Time series models",
"Irregularly sampled time-series",
"Autoregressive models",
"Recurrent neural networks"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/4421235ad97ccfe5ca2fc524ff78dca1ef7e9e31.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning on time series and dynamical systems"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/673005a6bc654c0214925ac296e8a39113f18c51.zip"
},
"title": {
"value": "TACD-GRU: Time-Aware Context-Dependent Autoregressive Model for Irregularly Sampled Time Series"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zxO4WuVGns | Inverse decision-making using neural amortized Bayesian actors | main | Active | Bayesian actor models;perception and action;cognitive science;Bayesian inference;inverse modeling | applications to neuroscience & cognitive science | 3;3;6 | 2;3;2 | 3;2;3 | 2;2;4 | 3;2;4 | 4 | 2.333333 | 2.666667 | 2.666667 | 3 | -0.5 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- In figure 2A, could you clarify what is $r^{\\ast}$? Should it be $a^{\\ast}$ instead?\n\n- The top left panel in figure 2B is missing a label. Should it be $\\sigma_0$?\n\n- In figure 2C, it would be useful to include separate x-axis labels for the analytical and nn cases.\n\n- The caption for figure 3B uses $\\beta$ as cost asymmetry parameter, but all figure labels use $\\alpha$. Are they the same?\n\n- In figures 3B and 3C, it would be helpful to make the ranges of the axes the same in all panels. \n\n- Figure 4b is difficult to follow, including a legend would be very helpful."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "This paper address an important bottleneck in inverse decision-making by amortizing the agent's behavior using a neural network. This enables efficient Bayesian inference over the subject's behavioral model parameters. The experiments on synthetic data validate the approach through comparison with analytical solutions. The discussion on identifiability and the experiment design recommendations add valuable practical insights."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a method for performing Bayesian inference on the parameters of Bayesian observer-actor models, particularly suited for scenarios where Bayesian decision-making can be computationally intractable. The approach leverages a neural network to amortize the decision-making process of the subject by training the network to minimize the expected task-relevant cost with respect to the posterior over latent states and the action distribution. This setup allows for efficient, gradient-based inference of parameters from behavioral data. The authors validate their approach on synthetic data, highlighting its effectiveness and also discuss identifiability issues with recommendations to mitigate them. They further illustrate the method's applicability to human behavioral data."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "In this work, the proposed approach aims to infer what a subject’s decisions were optimal for. However, there is still an assumption of optimal behavior, which may not always hold in real-world scenarios. Factors such as suboptimal learning or changing task demands can lead to deviations from optimality. Even if these deviations could potentially be reframed as an alternative optimality criterion, doing so would introduce additional identifiability challenges. It would be beneficial to discuss the limitations of this assumption.\n\nAnother potential limitation is that the use of the reparameterization trick requires a specific form of action distribution, which may restrict the model’s adaptability to diverse datasets and tasks where this distributional form does not apply.\n\nFinally, the presentation could be improved. Several figures lack clear labels and legends, making them difficult to interpret, and acronyms are introduced without prior definition. A revised presentation with attention to these details would enhance the paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "How scalable is the current approach? What are the computational requirements for training the neural networks for more complex cognitive reasoning tasks?\n\nCould the authors provide more details about the choice of network architecture and hyperparameters?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper provides an innovative approach by using a neural network to approximate the Bayesian model for inverse inference, which traditionally faces computational intractability issues. Their neural network method, trained in an unsupervised manner, enables efficient inference of decision-making parameters without relying on closed-form solutions or restrictive assumptions (like Gaussian distributions or quadratic costs).\n\nClear problem formulation and motivation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper addresses the challenge of using Bayesian models to infer decision-making parameters (inverse decision making) from behavioral data, especially for tasks involving continuous actions where traditional Bayesian methods struggle with computational intractability. \n\nThe authors propose a new method in which a pre-trained neural network, trained in an unsupervised manner, is used to approximate the actor model’s parameters. The gradient-based Bayesian inference makes the method relatively efficient. This approach shows promising alignment with analytical solutions where they exist and effectively models human behavioral data in various sensorimotor tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The authors mentioned that their method could be applicable to a large number of tasks involving continuous responses, including economic decision-making, psychophysical production and crossmodality matching. However, the authors only tested their method on sensorimotor tasks. Testing methods on a diverse set of tasks involving continuous responses would significantly strengthen the paper.\n\nThe authors acknowledge that this method is currently constrained to relatively straightforward perceptual models. Extending it to more complex tasks (such as those involving circular variables or advanced cognitive reasoning) remains a limitation in its current form."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to the Weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The writing is clear, and the core ideas are well articulated. \n2. This paper introduces a novel approach for Bayesian inference about the parameters of Bayesian actor models."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper considers the important and fundamental problem of inferring priors and costs from behavior. Typically, the inverse decision-making problem is intractable. Therefore, the authors approximate the solution with a neural network and show that the ground truth can be recovered well on a simulated dataset. The authors also explore human behavioral data."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The most significant concern is the lack of experimental advancements. This work only presents experimental results from numerical simulations and a simple human behavioral dataset, where a simple MLP is able to recover posterior distributions. Presumably, the algorithm proposed by the authors will face several challenges when we have to deal with high-dimensional input.\n1. It might be hard to train $f_{\\psi} (\\theta, m)$ when $\\theta$ contains more than 100M parameters.\n2. The HMC might not have the property of rapid mixing."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We developed an efficient Bayesian inference methods for priors, uncertainties, and costs from behavior by amortizing Bayesian actor models using neural networks."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024inverse,\ntitle={Inverse decision-making using neural amortized Bayesian actors},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zxO4WuVGns},\nnote={under review}\n}"
},
"abstract": {
"value": "Bayesian observer and actor models have provided normative explanations for many behavioral phenomena in perception, sensorimotor control, and other areas of cognitive science and neuroscience. They attribute behavioral variability and biases to interpretable entities such as perceptual and motor uncertainty, prior beliefs, and behavioral costs. However, when extending these models to more naturalistic tasks with continuous actions, solving the Bayesian decision-making problem is often analytically intractable. Inverse decision-making, i.e. performing inference over the parameters of such models given behavioral data, is computationally even more difficult. Therefore, researchers typically constrain their models to easily tractable components, such as Gaussian distributions or quadratic cost functions, or resort to numerical approximations. To overcome these limitations, we amortize the Bayesian actor using a neural network trained on a wide range of parameter settings in an unsupervised fashion. Using the pre-trained neural network enables performing efficient gradient-based Bayesian inference of the Bayesian actor model's parameters. We show on synthetic data that the inferred posterior distributions are in close alignment with those obtained using analytical solutions where they exist. Where no analytical solution is available, we recover posterior distributions close to the ground truth. We then show how our method allows for principled model comparison and how it can be used to disentangle factors that may lead to unidentifiabilities between priors and costs. Finally, we apply our method to empirical data from three sensorimotor tasks and compare model fits with different cost functions to show that it can explain individuals' behavioral patterns."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Bayesian actor models",
"perception and action",
"cognitive science",
"Bayesian inference",
"inverse modeling"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/8833b5713be44035257a772c3854d717eba56152.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to neuroscience & cognitive science"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/64ea835d5bd0f99a40d9971fb705aeef02351558.zip"
},
"title": {
"value": "Inverse decision-making using neural amortized Bayesian actors"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zxbQLztmwb | Emergent Symbol-Like Number Variables in Artificial Neural Networks | main | Active | mechanistic interpretability;numeric cognition;causal interventions;DAS | interpretability and explainable AI | 3;3;5;6 | 3;4;4;2 | 3;3;2;3 | 2;2;2;3 | 3;3;2;4 | 4.25 | 3.25 | 2.75 | 2.25 | 3 | -0.522233 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Do you think there is a reason networks should generally tend to learn the Up-Down algorithm rather than the others?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper furthers the body of work on interpreting how transformers complete mathematical or symbolic tasks\n\n2. The paper is well-written, and the motivation is clearly conveyed"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper explores how transformer models implement a counting algorithm to predict sequences in which a token should be repeated an equal number of times to its occurrences in the prompt. They propose several candidate algorithms: counting up from zero to the target quantity, counting down from the target quantity to zero, and summing 1 or -1 for each token in the context. They analyze the model using Distributed Alignment Search and find that the most aligned algorithm is counting down from the target quantity to zero."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Novelty concerns: it does not seem like this paper advances the Pareto frontier of interpretability in transformer models. The task studied is very simple and involves repeating a token the same number of times as it occurred in the prompt, and though the introduction makes an attempt to distinguish it as a special study of symbolism that has not been previously explored, similar analyses exist in a wide range of the literature, on much more complicated tasks than this one.\n1. The literature contains extensive study of transformers trained on mathematical tasks, such as modular addition (https://arxiv.org/abs/2301.05217, https://arxiv.org/abs/2306.17844), addition (https://arxiv.org/pdf/2310.13121), induction (https://arxiv.org/abs/2209.11895), relational reasoning (https://arxiv.org/pdf/2310.09753), etc. The cited Feng and Steinhardt paper (https://arxiv.org/pdf/2310.17191) is also a great example. These papers consider tasks more complex than the one considered in this paper, most of which also implicitly involve storing important properties of the prompt and representing numbers in a symbolic way, and in my view it's hard to see how the simple task considered in this paper somehow has properties that reveal a unique capability in transformer reasoning that these papers do not. \n2. These papers also typically contain various methodological insights on how to perform interpretability, or analysis of training dynamics and \"how\" the models learned good algorithms, and which types of algorithms they are likely to learn, while this paper simply applies DAS, a technique from prior work, and does not contribute any new methods. This paper would benefit from a study of whether their \"up-up\" or \"up-down\" is more likely to be learned by a neural network for some systematic reason involving inductive biases, not just for the single network being studied."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- In addition to alignment accuracy, are there other methods to verify that the programs learned by the models are identical to those described in the paper? \n- Numerous alternative programs could assist the model in solving counting tasks (as discussed in [3, 4]); why necessarily Up-down or Ctx-Distr? \n- If theoretical support is lacking, additional experimental evidence through advanced alignment analyses or larger model evaluations would be highly valuable [7].\n\n[7] Wu, Zhengxuan, Atticus Geiger, Thomas Icard, Christopher Potts, and Noah Goodman. \"Interpretability at scale: Identifying causal mechanisms in alpaca.\" Advances in Neural Information Processing Systems 36 (2024)."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- Novel application of DAS to study the symbolic counting behaviors of RNNs and Transformers\n- The experiments present interesting results to verify the hypothesis"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates whether neural networks, specifically GRUs, LSTMs, and Transformers, can develop symbol-like number representations through next-token prediction (NTP) tasks. Using numeric equivalence tasks as a foundation, the authors apply causal analysis, notably Distributed Alignment Search (DAS), to explore whether the models’ hidden representations align with symbolic counting behaviors, expressed by 3 hypothetical programs: Up-Down, Up-Up, and Ctx-Distr. Experimental results conclude that RNNs mimic symbolic counting by incrementing or decrementing a variable; Transformers compute context step-by-step without cumulative counter states; and larger models produce less graded alignment than larger ones."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The idea that symbol-like variables can emerge purely from next-token prediction objectives in neural networks is not surprising. Given the success of large language models (LLMs) trained with next-token prediction (NTP), numerous recent studies have empirically and theoretically validated NTP's effectiveness as a universal learning approach for many tasks including numerical reasoning [1,2]. \n- The paper merely highlights the alignment between the hypothesis program and the neural network representations, which does not strongly guarantee that this is the actual program the models follow. Recent research on symbolic emergence in RNNs, including studies on grammatical structures and counting without explicit symbol training, demonstrates that these neural architectures often simulate symbolic programs [3, 4]. Compared to this paper, previous works are more theoretically rigorous.\n- Similarly, the paper's findings on the counting capabilities of Transformers are superficial. Recent studies have provided a more comprehensive and systematic analysis of this ability [5], which diminishes the paper's overall contribution.\n- The paper focuses solely on a basic counting task, which weakens its broader claim of discovering \"symbol-like variables.\" To substantiate this claim, more complex symbolic tasks should be explored [6].\n\n[1] Bubeck, Sébastien, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee et al. \"Sparks of artificial general intelligence: Early experiments with gpt-4.\" arXiv preprint arXiv:2303.12712 (2023).\n\n[2] Malach, Eran. \"Auto-regressive next-token predictors are universal learners.\" arXiv preprint arXiv:2309.06979 (2023).\n\n[3] Weiss, Gail, Yoav Goldberg, and Eran Yahav. 
\"On the practical computational power of finite precision RNNs for language recognition.\" arXiv preprint arXiv:1805.04908 (2018).\n\n[4] El-Naggar, Nadine, Andrew Ryzhikov, Laure Daviaud, Pranava Madhyastha, and Tillman Weyde. \"Formal and empirical studies of counting behaviour in ReLU RNNs.\" In International Conference on Grammatical Inference, pp. 199-222. PMLR, 2023.\n\n[5] Behrens, Freya, Luca Biggio, and Lenka Zdeborová. \"Understanding counting in small transformers: The interplay between attention and feed-forward layers.\" arXiv preprint arXiv:2407.11542 (2024).\n\n[6] Le, Hung, Dung Nguyen, Kien Do, Svetha Venkatesh, and Truyen Tran. \"Plug, Play, and Generalize: Length Extrapolation with Pointer-Augmented Neural Memory.\" Transactions on Machine Learning Research, 2024."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Regarding the non-zero dimensions in D: How significantly does this hyperparameter impact the results? \n\n(PS:I am not familiar with this field, though I have a relatively strong understanding of the paper’s content and personally appreciate it—I believe the clarity of this paper’s description allows even those unfamiliar with the topic to gain valuable insights. However, since I may not have read some of the most relevant related research and cannot assess whether the related work is sufficiently comprehensive, I am not in a position to accurately gauge the contribution level of this work. I trust that the Area Chair will make an appropriate decision.)"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The motivation of the paper is reasonable, and the description is very clear, making the thought process easy to follow. It’s worth mentioning that although I am not familiar with the field of causal abstraction, reading the paper and reviewing the related work allowed me to gain a general understanding of the field and appreciate the contributions of this work, which is commendable.\n\n2. The experimental design appears to be reasonable and thorough. It identifies different characteristics of symbolic program approximation across various model architectures and analyzes the impact of model size and intervention magnitude on IIA."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates how neural networks, specifically recurrent networks and Transformer architectures, can generate symbol-like numeric representations purely from the next-token prediction (NTP) objective. The study reveals that recurrent networks tend to align closely with symbolic counting processes, while Transformers approach numeric tasks by recomputing quantities contextually at each step. Additionally, these symbol-like representations exhibit a graded nature and are influenced by task structure and model size."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Although the content of the paper is acceptable to me, I feel a bit disappointed that the paper only briefly mentions leaving more complex tasks and larger models for future research. I am unsure whether the current research approach has enough potential for further expansion.\n\n2. As stated in Section 4.3, the reason why the Same-Object task causes poorer alignment in recurrent neural networks compared to both the Single-Object and Multi-Object tasks is unclear."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. On line 140, what is the “Phase variable to track quantity at each step in the sequence”? What does the word “quantity” refer to here? I’m assuming this variable tracks whether the T token has been encountered yet (as for the Up-Down Program), but the sentence is confusing and doesn’t seem to indicate that.\n2. Why is DAS in the Transformer case only applied to the hidden states between the two Transformer layers, as opposed to within things like MLP hidden states?\n3. Why only investigate the presence of a single program variable? For instance, in the Up-Down Program, why not try to find and causally manipulate the representation of both the count variable and the phase variable (wee “weakness” above)?\n4. Regarding Section 4.4, the proposed explanation for why representations of large numbers are less precise implies that task performance should also be worse at these larger numbers. Is there evidence of this? Despite the fact that all models achieve 99.99% accuracy, perhaps you can still check if it is true at earlier points in training."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The work is, for the most part, easy to follow. I understood the tasks, methods (although seem points below on DAS), and results.\n2. The core claims appear to be well-supported.\n3. I like the approach of using methods from causality and training using next-token prediction as opposed to more explicit pressure for symbolic number representations. The advantages of these methods are clearly articulated."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper investigates the emergence of symbol-like number variables in the representations of sequence models trained on next-token prediction. The task is such that latent number variables are presumably required to solve it in the general case, but such variables are never given as explicit supervision targets and must be learned purely from the next-token prediction task pressure. This means that the findings have particular relevance to, for instance, language models. The methods used to investigate the presence or absence of number variables are taken from causality and mechanistic interpretability. The main finding is that simple RNNs do indeed appear to learn the correct symbol-like number variables on most tasks, albeit in an imperfect way.\n\nMy current leaning is to reject the paper, but I am willing to increase my score to a marginal accept if weaknesses 1-5 (along with my questions) are all addressed in the paper or satisfactorily rebutted."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. As far as I can tell, the DAS interventions only really test within-distribution interventions. Despite some target quantities being held out, these are basically holes within an observed range of integers. Why not see if the model generalizes to numbers outside this range (e.g., 30) and at what value things begin to break down? Also, why not apply DAS for both the count variable and the phase variable in the Up-Down program, and then test to see if the model can be causally manipulated in more interesting ways (e.g., flipping the phase variable several times within the same sequence, even though at training time it only ever flips once and in one direction). These sorts of more aggressive causal interventions are important to understand the degree to which the learned function is symbol-like, since a hallmark of symbolic programs is their systematicity.\n2. The explanation of DAS is, in my opinion, not written clearly. I understand that this was not the contribution of the current work, but given its centrality it should be explained with more intuition (including with a figure). If I had not already been familiar with DAS prior to reading this paper, I think a lot of it would have gone over my head.\n3. Figure 3 right panel seems not to provide much interesting information, and I would suggest removing this result. It is obvious that increasing model size will increase task performance, and also obvious that all accuracy metrics (including symbolic alignment) should more or less improve over the course of training. Figure 3 left panel seems completely sufficient on its own to show the results of Section 4.2.\n4. That the Same-Object task should yield unaligned models seems completely mysterious to me. If anything, I would have expected that models trained on Same-Object would be most aligned among the 3 tasks. 
Given that this paper is about mechanistic interpretability with respect to symbolic algorithms, I think it is extremely important that the strange behaviour for the Same-Object task, in which no symbolic algorithm is aligned, be explained.\n5. Some ungrammatical sentences and typos peppered throughout the text (e.g., line 169, line 190, line 246, etc.) need to be corrected. Please proofread again for clarity.\n6. The work lacks a bit of ambition, in my opinion. If one is going to investigate alignment to symbolic programs (which has already been done before in other settings, such as supervised learning), then why not go for more complex tasks which have more complex symbolic algorithmic solutions, with more variables that one could check alignment with respect to? I understand that this was intended as a proof of principle, but honestly the proof of principle in very simple tasks like these seems at once reminiscent of other work I have seen at various conferences, and does not tell me much about whether these sorts of results scale in practice to more complex reasoning tasks. In sum, the results seem to be a bit of a low-hanging fruit, and I would have liked to see a more ambitious project investigating alignment to more complex symbolic programs."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We find symbol-like variables in ANNs using causal interpretability methods, we also find differences between recurrent and attention based solutions."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024emergent,\ntitle={Emergent Symbol-Like Number Variables in Artificial Neural Networks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zxbQLztmwb},\nnote={under review}\n}"
},
"abstract": {
"value": "Symbolic programs, defined by discrete variables with explicit rules and relations, often have the benefit of interpretability, ease of communication, and generalization. This is contrasted against neural systems, consisting of distributed representations with rules and relations defined by learned parameters, which often have opaque inner mechanisms. There is an interest in finding unity between these two types of systems for cognitive and computer scientists alike. There is no guarantee, however, that these two types of systems are reconcilable. To what degree do neural networks induce abstract, mutable, slot-like variables in order to achieve next-token prediction (NTP) goals? Can neural functions be thought of analogously to a computer program? In this work, we train neural systems using NTP on numeric cognitive tasks and then seek to understand them at the level of symbolic programs. We use a combination of causal interventions and visualization methods in pursuit of this goal. We find that models of sufficient dimensionality do indeed develop strong analogs of symbolic algorithms purely from the NTP objective. We then ask how variations on the tasks and model architectures affect the models' learned solutions to find that numeric symbols are not formed for every variant of the task, and transformers solve the problem in a different fashion than their recurrent counterparts. Lastly, we show that in all cases, some degree of gradience exists in the neural symbols, highlighting the difficulty of finding simple, interpretable symbolic stories of how neural networks perform their tasks. Taken together, our results are consistent with the view that neural networks can approximate interpretable symbolic programs of number cognition, but the particular program they approximate and the extent to which they approximate it can vary widely, depending on the network architecture, training data, extent of training, and network size."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"mechanistic interpretability",
"numeric cognition",
"causal interventions",
"DAS"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/2229b10ebdd7a84e50fe81d3027f42bcd033e8bb.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/0c7f4824fa063d51db440e69f048601a994acbe2.zip"
},
"title": {
"value": "Emergent Symbol-Like Number Variables in Artificial Neural Networks"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zxg6601zoc | Re-Imagining Multimodal Instruction Tuning: A Representation View | main | Active | Representation Tuning;Large Multimodal Models;Parameter-efficient Fine-tuning | unsupervised, self-supervised, semi-supervised, and supervised representation learning | 3;6;6;6 | 5;3;3;4 | 2;3;3;3 | 2;3;3;3 | 2;3;3;3 | 5.25 | 3.75 | 2.75 | 2.75 | 2.75 | -0.870388 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see Weaknesses."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.\tThe overall writing is clear.\n2.\tThe idea is general and could be applied to many different applications. I think this would be of interest to people in the LMM community.\n3.\tThe proposed method is simple yet effective.\n4.\tThe experiments clearly show the improvement of the MRT over the PEFT."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a novel Multimodal Representation Tuning which can editing LMM representation and provide control. The paper introduces a representation editor $\\phi$ based on linear representation hypothesis and interchange interventions, which can apply to different representations in LMM. The overall writing is clear. The experiments cover the comparison between the MRT and the other PEFT methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tThere are some typos. For example, ##hu2024bliva## in the \"Multimodal Instruction Tuning\" section of the related work.\n2.\tThe ablation study is insufficient. I would expect more ablation experiments, such as applying MRT only to Visual Representation and Cross-modality Representation.\n3.\tThere is too little theoretical analysis on why MRT is better than PEFT."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"Yes, Potentially harmful insights, methodologies and applications"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Could the authors elaborate on any plans to automate the rank-tuning process within MRT to simplify its application?\n2. Section 4.3 is particularly interesting to me. However, I think the attacker can easily break the controllability only by changing the order of the text instruction. Do you have any potential solutions and ideas on that?\n3. Instead of simple image classification, are there other qualitative examples of counterfactual controls (e.g., VQA)?\n4. Any analysis on which set of layers L to intervene on for visual / cross-modality / multimodal editor?\n5. What are some insights on fine-tuning only prefix/suffix tokens of textual embedding in the multimodal editor?\n6. Minor Errata\n 1. L84: hu2024bliva looks typo."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- MRT is intuitive, simple, and effective. It uses significantly fewer parameters while achieving strong results, making it suitable for resource-constrained applications.\n- The approach provides granular control over the representation editing, enabling counterfactual output generation that enhances interpretability.\n- The method is validated across multiple multimodal tasks, illustrating its effectiveness in diverse domains such as OCR, visual perception, and spatial reasoning.\n- The paper is well-written and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces Multimodal Representation Tuning (MRT), a parameter-efficient fine-tuning method to enhance controllability and interpretability in multimodal large language models (LMMs). MRT addresses the challenge of adapting LMMs effectively with fewer parameters by leveraging token-level multimodal representation control, achieving superior performance with up to 21 times fewer parameters than similar approaches. The authors explore and improve model behavior control through MRT, illustrating benefits in various multimodal perception and cognition benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The rank parameter is integral to MRT’s performance, but it currently requires manual tuning, which may limit practical adoption. While promising, the need for automated rank selection is highlighted as a limitation, suggesting a more autonomous rank-searching mechanism that could enhance usability.\n- The empirical performance is decent, but could you elaborate on MRT's main contribution compared to ReFT instead of extending the interchange intervention idea into MLLM?\n- Given its control over multimodal representations, MRT’s potential for misuse (e.g., manipulation of outputs) is acknowledged but not fully addressed in terms of mitigation strategies. Could you provide some examples of possible misuse of MRT utilizing controllability?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See Weaknesses"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Motivation is clear\n- Intuitive and simple idea that works well\n- Extensive analysis/ablation on the design choices"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a method for tuning large multi-modal models (LMM) in a efficient but effective way so that it can achieve similar performance to full fine-tuning, with an additional objective of having a controllability. The key idea of this paper is based on a prior technique that learns parameters that edits the representations. The main contribution of this paper is to make use of this technique for efficient tuning of LMMs, and the experiments show that this idea indeed is helpful and the paper did a good job of investigating the effect of various design choices."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Experiments on controllability is interesting but not conclusive. For instance,\n- What would happen if you don't use ROI and do train with all tokens?\n- What would happen if you fine-tune other baselines for this setup?\n- What would happen if you use a sentence that has a same semantic meaning to 'Is the object an e in the image?' but with different structure and words, after fine-tuning? Would the model still be controlled as intended?\n\nAdditional weaknesses are:\n- Figure 5 on the optimization landscape is interesting but I'm not sure how it is cherry-picked. Would there be a way to make this claim be supported by some metrics or more figures?\n- Main method section feels a bit redundant to me, not much of a difference between each subsection. It could be nice to think of a better way to re-structure, remove redundancy, and think of a way to clearly explain what differences exists\n\nNote: I'm not an expert in this area so I might be missing some experimental details. I'll check the other reviews on how the experimental setup is valid and how the results are good compared to other baselines."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. In Sec 3.2 Cross-modality Representation part, it says that the projector integrates representations from each layer of the visual encoder. Does this mean you combine vision features in different layers in the vision encoder as the final vision input for LLM?\n2. What is the reason for applying prefix and suffix editors on textual tokens, as the prefix and suffix of different textual prompts at the same position might have very different semantics and meanings? Wouldn't this be harmful for the claimed interpretability and controllability?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. This paper provides a promising and efficient PEFT method for MLLMs as an alternative to the commonly used LoRA-based methods.\n2. The ablation studies are comprehensive for a better understanding of the proposed method and hyper-parameter design choices."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper adopted the parameter-efficient fine-tuning method Representation Tuning to the multimodal large language model domain. This paper used different representation editors for the vision encoder, LLM, and cross-modality projectors to optimize the visual representation, cross-modality representation, and multimodal representation. Experiments on several MLLM and image classification benchmarks show the efficiency and effectiveness of the proposed method. The paper further conducted controllability studies on image classification benchmarks to show the possible controllability of the proposed methods. Extensive ablation studies are furhter discussed for the designs of several hyper-parameters of the proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper directly adapts the representation learning method in LLM to the MLLM in a rather straightforward way thus the technical contribution is limited.\n2. The benchmark selection for comparisons is not comprehensive and convincing enough. MME is a relatively small-scale MLLM benchmark that has non-trivial variances. The paper should include more comprehensive and commonly used multimodal benchmarks like SEED, MMBench, MMMU, GQA, VQAv2, ChartQA, and DocVQA for more convincing comparisons.\n3. The performance of other methods might have some problems. In Table 1 of this paper, the lora baseline is significantly worse than the full-finetuning one, but according to LLaVA's results (https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md), the performances should be similar. Besides, the proposed method adds tunable parameters in the vision encoder while other baselines use frozen vision encoders. Baselines with unfrozen vision encoders should also be added for comparison.\n4. As the proposed method brings additional parameters and time costs in the inference phase, the efficiency advantage of the proposed method should be clearly verified in the training phase, including GPU memory usage and training speed comparisons,\n5. To validate the generalization ability of the proposed method, the authors should include experiments with different vision encoders + different LLM types and sizes.\n6. Although the paper claimed that the proposed method brings more interpretability and controllability to MLLMs compared with common practices, it is hard to see how could the method really help the interpretability and controllability of MLLMs. The controllability study is too customized with human priors and provides little help for MLLM controllability in the general case."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Multimodal Representation Tuning for Zero-shot Multimodal Instruction Learning"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024reimagining,\ntitle={Re-Imagining Multimodal Instruction Tuning: A Representation View},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zxg6601zoc},\nnote={under review}\n}"
},
"abstract": {
"value": "Multimodal instruction tuning has proven to be an effective strategy for achieving zero-shot generalization by fine-tuning pre-trained Large Multimodal Models (LMMs) with instruction-following data. However, as the scale of LMMs continues to grow, fully fine-tuning these models has become highly parameter-intensive. Although Parameter-Efficient Fine-Tuning (PEFT) methods have been introduced to reduce the number of tunable parameters, a significant performance gap remains compared to full fine-tuning. Furthermore, existing PEFT approaches are often highly parameterized, making them difficult to interpret and control. In light of this, we introduce Multimodal Representation Tuning (MRT), a novel approach that focuses on directly editing semantically rich multimodal representations to achieve strong performance and provide intuitive control over LMMs. Empirical results show that our method surpasses current state-of-the-art baselines with significant performance gains (e.g., 1580.40 MME score) while requiring substantially fewer tunable parameters (e.g., 0.03% parameters). Additionally, we conduct experiments on editing instrumental tokens within multimodal representations, demonstrating that direct manipulation of these representations enables simple yet effective control over network behavior."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Representation Tuning",
"Large Multimodal Models",
"Parameter-efficient Fine-tuning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/42d47f0ef5576a2ec6fac5a5e92b4a865f634dec.pdf"
},
"presentation": null,
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Re-Imagining Multimodal Instruction Tuning: A Representation View"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zxqdVo9FjY | Generalization for Least Squares Regression with Simple Spiked Covariances | main | Active | Generalization;Random Matrix Theory;Spiked Covariance;Two Layer Network;Layer Wise Training | learning theory | 3;3;3;5;5 | 3;4;4;3;3 | 2;3;2;3;2 | 2;2;1;3;1 | 2;3;1;4;3 | 3.8 | 3.4 | 2.4 | 1.8 | 2.6 | -0.666667 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
    "value": "We thank the reviewers for their comments and their help in improving the paper, and we hope that our responses, together with the new results, have addressed their concerns. If there are further questions that we can answer, we would be happy to continue the discussion."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
    "value": "> In footnote 3 (Line 266), the authors say \"... \n\nIf we use these results, then, similar to Equations C.23 in Ba et al. 2022 and Equation (5) in Moniri et al. 2023, the value of the Stieltjes transform would be given as the unique solution to a set of consistency equations. Hence, we would replace the **explicit** values in Lemmas 7 and 8 with these **implicit** values. We did not do so since we wanted explicit closed-form expressions; however, please see the general response, which shows how this can be achieved.\n\n> Why is there no regularization for the signal-plus-noise problem when there is regularization for the signal-only problem (Line 278-285)?\n\nThis limitation is primarily technical. The regularized case for the signal-plus-noise model introduces many additional terms whose mean and variance would need to be bounded, significantly complicating the analysis.\n\n> How do the authors arrive at \"Hence, we see that if the target vector y has a smaller dependence on the noise (bulk) component A, then we see that the spike affects the generalization error.\" in Line 380? Its connection to the previous part seems to be missing.\n\nHere we have that $ y_i = \\beta_*^Ta_i + \\beta_*^T z_i + \\varepsilon_i$. We see that for the bulk term, $a_i^T \\beta_* \\approx \\tau_A \\|\\beta_*\\|$, and for the spike term, $z_i^T \\beta_* \\approx \\theta\\|\\beta_*\\|$.\n\nUsing our scaling $\\theta = \\tau \\sqrt{n}$, the signal part is always larger. However, when the bulk also grows (i.e., $\\tau_A = \\Theta(d)$), the spike's effect becomes invisible. Specifically, we need:\n- $a_i^T \\beta_* = \\Theta(1)$\n- $z_i^T \\beta_* = \\Theta(\\sqrt{n})$\n(assuming $\\|\\beta_*\\| = \\Theta(1)$) for the spike to have a detectable effect.\n\n> Undefined symbols \n\nWe apologize. $f \\ll g$ means $f = O(g)$, and $f \\asymp g$ means $f \\ll g$ and $g \\ll f$.\n\n> How do the authors come up with the equation for the peak point of double descent in Line 477? Is it an empirical observation or a theoretical result?\n\nThis is currently an empirical observation. \n\nIt can be proved by computing the expression's derivative (and second derivative) and solving for where the derivative vanishes. The derivative expressions are quite involved; even symbolic programs such as SciPy struggled. **Hence, this leads us back to our contribution of simplified expressions**. While our expressions are further simplified compared to Cui et al. 2024, we believe further simplification is only a positive.\n\n> Typos\n\nThe reviewer is correct in identifying typos. These shall be fixed."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": {
"value": "Part 2"
},
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
    "value": "We thank the reviewer for their detailed feedback. Let us address the key points:\n\n> Limited contribution/novelty... Most of the results in this paper are trivial extensions of the results by Hastie et al. (2022) and Li & Sonthalia (2024)\n\nWe respectfully disagree. Our contributions extend beyond prior work in several important ways:\n\n1. Finite vs Asymptotic Analysis: Prior work focused on asymptotic results. We provide finite-sample corrections that reveal how spikes affect generalization. We show when these corrections matter (small bulk variance) and when they don't (large bulk variance).\n\n2. Technical Novelty: \n - As the reviewers point out, we allow one eigenvalue to diverge, unlike Hastie et al. 2022. This is a significant difference. \n - Compared to Li and Sonthalia 2024: only one of their two models allows for an eigenvalue to diverge. This model is closely related to the Signal-Only model in this paper. However, we have output noise $\\varepsilon$. This creates many new dependencies requiring novel analysis techniques.\n - Consider the proof sketch on page 10. Line 491 shows that for the signal-only problem, our solution is the solution from the Li and Sonthalia 2024 paper plus an extra term. The $\\hat{\\varepsilon}$ is not isotropic in the ridge-regularized version. Hence, we do not immediately have free independence between $\\hat{Z}+\\hat{A}$ and $\\hat{\\varepsilon}$, and we need to be very careful about the alignment between the two. \n - Hence, we get terms that are cubic and quartic in eigenvalues (vs quadratic in prior work). These required the development of new concentration bounds for these higher-order terms. See Lemmas 17 through 22. \n\n The Li and Sonthalia 2024 paper does not consider the signal-plus-noise case, which introduces further terms that need bounding. Finally, Li and Sonthalia 2024 only consider the case when $\\tau_A = 1$, or more broadly $\\tau_A = \\Theta(1)$, whereas we allow $\\tau_A = \\Theta(\\sqrt{d})$. **This is also significant**.\n\n> Although the paper is motivated by Moniri et al. (2023), there are significant discrepancies.\n\nWe agree. Please see the general response on how we can handle the dependency. \n\nAdditionally, we think of the difference in the targets as a strength of the paper, as it lets us show the following. \n\n1. Relationship between targets and spike size:\n - For targets depending on the bulk (input data), large spikes are crucial\n - The required spike size scales with input dimension and dataset size\n \n2. Importance of spike-target alignment:\n - The alignment between spike direction and targets significantly affects generalization\n - This alignment term exhibits its own double descent behavior\n - Small alignment improvements can yield large generalization gains\n\n3. Double descent characteristics:\n - Peak location depends on bulk variance and regularization strength\n - Suggests weight decay regularization primarily affects the bulk, not learned features (spike)\n\n> There exists a related work (Cui et al., 2024)... Dandi et al. (2024)\n\nThank you for bringing these to our attention. The second is quite new and, as the reviewer points out, was only posted well after the submission deadline. Hence, we believe that it is concurrent work and should **not** affect the review of our paper. We shall nonetheless discuss the two papers in the revision. \n\nThere are many differences from Cui et al. 2024:\n\n1. We work in a more restricted setting but provide rigorous proofs. \n\n2. We simplify expressions. For example, $\\zeta$ in equation 17 in Cui et al. 2024 is exactly $\\xi - 1$ in our paper (see Lemma 13 in the appendix for a definition). The expression in Cui et al. 2024 is left in terms of $\\zeta$. **This is because the results are a product of dependent terms. Hence, simplification is not easy**. However, we\n a. compute the expectations and variances of each of the terms, \n b. compute the expectations of the products, and \n c. greatly simplify the expressions.\n These simplifications are a major strength of our work, as the expressions are interpretable without numerical computations. \n\n> In Line 178, denotes the case with a single spike (as shown by Moniri et al., 2023)...\n\nWe only consider the single-spike case. The analysis here is similar to Sonthalia and Nadakuditi 2023. Kaushik et al. 2024 extend Sonthalia and Nadakuditi 2023 to the higher-rank version. We would need to do the same to extend to multiple spikes. As mentioned before, the difficulty in the analysis was to\n a. compute the expectations and variances of each of the terms, \n b. compute the expectations of the products, and \n c. greatly simplify the expressions.\n\nThese are currently scalar expressions, so we can use commutativity. For multiple spikes, we have matrix expressions, so we no longer have commutativity. The analysis is possible, but it is just quite tedious.\n\nWe ignore the $o(\\sqrt{n})$ term; we shall highlight this as another discrepancy."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": {
"value": "Part 1"
},
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
    "value": "> The authors claim l.083 that Moniri et al. (2023) do not quantify the test error after one gradient step. To the best of my understanding, they do provide such a tight characterization (Theorem 4.5). Could the authors clarify their claim, and emphasize how their work is positioned with respect to Moniri et al. (2023)?\n\nWe should have been more precise in our claim. While Theorem 4.5 in Moniri et al. 2023 shows that the difference between the test error using $F_1$ (features after 1 step) and $F_0$ (initial features) converges to a constant, there are important differences in our approach:\n\n1. Moniri et al. 2023 have expressions requiring solutions to fixed-point equations (see equations (5) in their paper). These only hold asymptotically, with hard-to-quantify approximation rates. \n\n2. In contrast, we provide:\n\n - **Closed-form expressions** for the risk itself (not just differences, as is the case in Moniri et al. 2023)\n\n - **Better control on approximation error rates**, enabling analysis of finite matrices\n\n - The above two allow better control in understanding the relationship between the bulk and spike. \n\n> I find the discussion l.332-355 somewhat confusing\n\nWe apologize for the unclear notation. We use the Vinogradov notation where $f \\ll g$ means $f = O(g)$. Therefore, setting $\\theta^2 = \\tau^2 n$ is consistent with our assumptions.\n\n> I believe more intuition about the different scaling considered would help solidify the intuition for the spn case\n\nYou raise an excellent point. Let's clarify the scaling relationships:\n\n1. For the bulk term: $a_i^T \\beta_* \\approx \\tau_A \\|\\beta_*\\|$\n\n2. For the spike term: $z_i^T \\beta_* \\approx \\theta\\|\\beta_*\\|$\n\nUsing our scaling $\\theta = \\tau \\sqrt{n}$, the signal part is always larger. However, when the bulk also grows (i.e., $\\tau_A = \\Theta(d)$), the spike's effect becomes invisible. Specifically, we need:\n\n- $a_i^T \\beta_* = \\Theta(1)$\n\n- $z_i^T \\beta_* = \\Theta(\\sqrt{n})$\n\n(assuming $\\|\\beta_*\\| = \\Theta(1)$) for the spike to have a detectable effect.\n\n> In 3.3, more discussion in the main text about why only the unregularized case is considered for the spn case, while generic is considered for the signal-only model, would be helpful for intuition, whether it is for technical reasons or because it is not interesting.\n\nThis limitation is primarily technical. The regularized case for the signal-plus-noise model introduces many additional terms whose mean and variance would need to be bounded, significantly complicating the analysis.\n\n> typos\n\nThank you for identifying these issues. We will correct all typos in the revision."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
    "value": "We thank the reviewer for the feedback. \n\n> The paper is motivated by the spiked covariance from the one-step gradient feature learning in neural networks (Section 1). However, it did not show how the results can be applied to the feature learning scenario. I question the amount of contribution this paper provides.\n\nPrior work (Moniri et al. 2023 and Ba et al. 2022) established the existence of spikes for specific target types $y$ (single index models). Our work focuses on understanding the spike's effect through several novel contributions:\n\n1. We analyze various targets and alignments between spikes and targets, providing rigorous proofs of generalization bounds\n\n2. We demonstrate how asymptotic results may not capture behavior in finite networks\n\n3. We provide precise quantification of how spikes affect generalization in both finite and asymptotic regimes\n\n> The assumption in line 2221-222 and 253-255 is too strong. The analysis breaks down if there is dependency in the cross term. However, the paper did not show how big the difference the predicted result would be when there is dependence in the cross term. It is questionable if the result in this paper is applicable in realistic machine learning settings.\n\nWe have now removed this assumption. When considering the dependence structure from Moniri et al. 2023, the analysis actually simplifies. A key term in our analysis is the projection of the spike direction on the bulk eigenvectors. Due to the dependence, this term becomes more tractable. Please see the general response.\n\n> What novel results could the authors conclude in the feature learning setting in neural networks using the main theorems 3,4?\n\nOur analysis reveals several important insights for feature learning:\n\n1. Relationship between targets and spike size:\n\n - For targets depending on the bulk (input data), large spikes are crucial\n\n - The required spike size scales with input dimension and dataset size\n \n2. Importance of spike-target alignment:\n\n - The alignment between spike direction and targets significantly affects generalization\n\n - This alignment term exhibits its own double descent behavior\n\n - Small alignment improvements can yield large generalization gains\n\n3. Double descent characteristics:\n\n - Peak location depends on bulk variance and regularization strength\n\n - Suggests weight decay regularization primarily affects the bulk, not learned features (spike)\n\nWhile some of these phenomena have been observed before, we provide simplified, quantitative connections between them.\n\n> How could the authors show the assumption on the dependence does not affect the result? Is there any experimental validation?\n\nWe provide new theoretical results that explicitly handle the dependence structure. Please see the general response. \n\n> From Figure 4.1, we can see that the effect of the spike correction term is small when is large. Is the main theorem still useful to explain the phenomenon we see from feature learning?\n\nYes: prior work shows that the spike represents the learned feature. Our results allow for larger spikes than previously considered in works like Hastie et al. 2022. However, this shows that for the spike to affect the generalization, we need even bigger spikes. \n\n> This paper has problems with the wordings, even in main theorems. This makes the reading difficult. For instance:\n\nThank you for identifying these issues. We will fix all typos and improve clarity in the revised version."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
    "value": "We thank the reviewer for their comments. \n\n> They reference the work of Moniri et al., but this work is unrelated to neural networks or gradient descent; it addresses a purely linear regression problem for data with simple spiked covariances.\n\nWe respectfully disagree. Our work is directly motivated by and connected to neural networks through the following chain of reasoning:\n1. Ba et al. (2022) and Moniri et al. (2023) show that after one gradient step, the feature matrix $F_1$ can be written as $ F_0 + P$, where $P$ is a rank-$\\ell$ matrix. \n2. This creates a spiked covariance structure in $F_1^TF_1$.\n3. To understand the generalization error of such networks, we need to analyze least squares regression with $F_1$ as the feature matrix.\n4. Our work studies this exact setting, though in a simplified form, to make the analysis tractable. \n5. In the general rebuttal, we removed many of our simplifications. This further strengthens the connection.\n\n> They do not account for the generalization ability of neural networks after a single gradient step, as they bypass the gradient step entirely by assuming the W1 matrix directly, which does not reflect the full process of neural network training.\n\nWe agree with the reviewer. Building on the results from Ba et al. 2022 and Moniri et al. 2023, our new results take us towards understanding the generalization error for two-layer networks. \n\nOur analysis provides valuable insights:\n- It shows how spikes affect generalization in finite vs asymptotic regimes\n- It demonstrates the importance of the alignment between the spike direction and the target function\n\nWe never meant our paper to claim that we understood the generalization error for two-layer networks, as reflected in our title's focus on least squares regression.\n\n> Could you provide a reference for the statement, 'It has been shown that to understand the generalization...' on line 39?\n\nIn addition to the RMT paper cited in the paper, see [1] for an empirical result on more realistic networks. \n\n[1] Martin and Mahoney, \"Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning\", JMLR 2021.\n\n> Is your generalization analysis very different from the work of Li & Sonthalia (2024)?\n\nCompared to Li and Sonthalia 2024: only one of their two models allows for an eigenvalue to diverge. This model is closely related to the Signal-Only model in this paper. However, we have output noise $\\varepsilon$. This creates many new dependencies requiring novel analysis techniques. \n\n1. Consider the proof sketch on page 10. Line 491 shows that for the signal-only problem, our solution is the solution from the Li and Sonthalia 2024 paper plus an extra term. The $\\hat{\\varepsilon}$ is not isotropic in the ridge-regularized version. Hence, we do not immediately have free independence between $\\hat{Z}+\\hat{A}$ and $\\hat{\\varepsilon}$, and we need to be very careful about the alignment between the two. \n\n2. Hence, we get terms that are cubic and quartic in eigenvalues (vs quadratic in prior work). These required the development of new concentration bounds for these higher-order terms. See Lemmas 17 through 22. \n\nThe Li and Sonthalia 2024 paper does not consider the signal-plus-noise case, which introduces further terms that need bounding. Finally, Li and Sonthalia 2024 only consider the case when $\\tau_A = 1$, or more broadly $\\tau_A = \\Theta(1)$, whereas we allow $\\tau_A = \\Theta(\\sqrt{d})$. **This is also significant**."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
    "value": "We thank the reviewer for the feedback and comments. Key differences between our work and important prior research are that we (1) provide finite matrix correction terms and (2) offer simplified closed-form expressions.\n\n> Could the authors comment on the link between their results and (Ba et al. 2022) in the context of Gaussian Universality (see e.g. [1]) ? \n\nYes, the problem we and prior work are interested in understanding is the generalization error for the following. First, we solve a regression problem\n$$ \\beta\\_{LS} = \\arg\\min\\_{\\beta} ||y - \\beta^T F||\\_F^2 + \\lambda ||\\beta||\\_2^2 $$\nThen, we are interested in the generalization performance of $\\beta\\_{LS}$. Let's call this risk $R(F)$ to highlight the dependence on $F$. The difference between setups lies in the $F$ term. There are three different $F$'s considered:\n\n1. $F_{CK} := \\sigma(WX)$ for Gaussian $X$ and $W$, after taking a step of GD. This is from Ba et al. 2022.\n2. $F_{CE} := \\theta_1 WX + \\theta_2 A$, where $A$ has IID standard Gaussian entries independent of $W,X$, and $W$ is the weight matrix after taking a gradient step of GD.\n3. $F_{SP} := A + \\theta uv^T$, where $A$ has IID standard Gaussian entries. \n\nThe Ba et al. 2022 paper shows that in the small learning rate regime, $R(F_{CK}) = R(F_{CE})$ **asymptotically**.\n\nHowever, we do things differently:\n\n1. We allow spikes from the large learning rate limit. Hence, the result from Ba et al. 2022 does not apply. Equation 3.1 in Ba et al. shows that for small learning rates, the size of the spike is $\\Theta(1)$, whereas for large learning rates, it is $\\Theta(\\sqrt{d})$ (note that for the rank-one spike, the Frobenius norm is equal to the spectral norm). We are interested in the case when the spike is large. The idea behind the large step size is that we are in a regime in **which the Gaussian Equivalence Property is no longer true**. \n\n2. We provide more precise correction terms for finite matrices, while the prior work is purely asymptotic. \n\n3. In our rebuttal, we also generalize to models closer to that from Moniri et al. \n\n> One additional weakness of this submission is the related works coverage. \n\nWe thank the reviewer for pointing us to these works. We shall add these references. \n\n[5] characterizes the risk for the setting from Moniri et al. 2023 using, as the reviewer and the paper say, the non-rigorous replica symmetry method. The differences are three-fold:\n\n1. Our results in the paper are for a restricted setting; however, we provide proof. \n2. We simplify expressions. For example, $\\zeta$ in equation 17 in Cui et al. 2024 is exactly $\\xi - 1$ in our paper (see Lemma 13 in the appendix for a definition). The expression in Cui et al. 2024 is left in terms of $\\zeta$. **This is because the results are a product of dependent terms. Hence, simplification is not easy**. However, we\n a. compute the expectations and variances of each of the terms, \n b. compute the expectations of the products, and \n c. greatly simplify the expressions.\n\nWe believe these are the main challenges we overcome in our proof.\n\n> Could the authors elaborate on the connection with Moniri et al. 2024?\n\nWhile Theorem 4.5 in Moniri et al. 2023 shows that the difference between the test error using $F_1$ (features after 1 step) and $F_0$ (initial features) converges to a constant, there are important differences in our approach:\n\n1. Moniri et al. 2023 have expressions requiring solutions to fixed-point equations (see equations (5) in their paper). These only hold asymptotically, with hard-to-quantify approximation rates. \n\n2. In contrast, we provide:\n\n - **Closed-form expressions** for the risk itself (not just differences, as is the case in Moniri et al. 2023)\n\n - **Better control on approximation error rates**, enabling analysis of finite matrices\n\n - The above two allow better control on understanding the relationship between the bulk and spike. \n\n> What is the bottleneck for analyzing multiple spikes?\n\nThe analysis here is similar to Sonthalia and Nadakuditi 2023. Kaushik et al. 2024 extend Sonthalia and Nadakuditi 2023 to the higher-rank version. We would need to do the same to extend to multiple spikes. As mentioned before, the difficulty in the analysis was bounding variances. These are currently scalar expressions, so we can use commutativity. For multiple spikes, we have matrix expressions, so we no longer have commutativity. The analysis is possible, but it is just quite tedious.\n\n> maximal scaling regime \n\nWe don't know this regime. Does the reviewer mean the case when the step size is too small and we do not see a spike, or the regime from [1] where the spectrum becomes heavy-tailed? In the heavy-tailed situation, analysis similar to [2] can be used.\n\n[1] Martin and Mahoney JMLR 2021 - Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning\n\n[2] Wang et al. 2024 AISTATS - Near-interpolators: Rapid norm growth and the trade-off between interpolation and generalization"
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
"value": "A common criticism among reviewers was our abstraction of the dependency between bulk and spike components. Here we demonstrate how our proof framework extends to handle the dependent case from Moniri et al. 2023.\n\nRecall that Moniri et al.'s spike structure is:\n$$ \sigma(W_0\tilde{X}^T) + c (\tilde{X}\beta_{sp}) \zeta^T $$\nwhere $\tilde{X}$ is Gaussian data, $W_0$ are the inner layer weights, and $\zeta$ are the outer layer weights.\n\nWe modeled this as:\n$$ A + \theta vu^T $$\nwhere $A$ is Gaussian. Below we show how our analysis extends to Moniri et al.'s setting.\n\n## Introducing Dependence\n\nFirst, consider the intermediate structure:\n$$ X = A + \theta (A\beta_{sp})u^T $$\n\nTo analyze this, we only need to modify two parts of our proof:\n\n1. In Lemma 10, the scaling of the norm of $v^T A^\dag$ changes:\n - Original: $\mathbb{E}_{\lambda}\left[\frac{\lambda}{(\lambda + \mu^2)^2}\right]$\n - New: $v = A\beta\_{sp}$, and the norm becomes $\|\beta\_{sp}\|$\n\n where the expectations are over the Marchenko-Pastur distribution.\n\n2. The variable $t = (I-AA^\dag) v$ is no longer zero.\n\nFor the Signal-Only case, this gives the bias:\n$$ \frac{\theta_{tst}^2}{n_{tst}}\left[(\beta_*^T u)^2 + \tau_{\varepsilon}^2\left(\frac{(1+c)}{2T_1} + \frac{\mu^2 c - T_1}{2\tau_{A_{trn}}^2 T_1} \right)\right] $$\nwhere $T_1$ is unchanged. We do not present the whole formula for brevity. To extract insights, we consider the same simplifications as in the paper.\n\nUnder the simplifications ($\mu = 0$, $\tau_{A_{trn}} = \tau_{A_{tst}}$, $\theta = \tau \sqrt{n}$), for $c > 1$ we get:\n$$ \tau_{A}^2 (\beta_*^Tu)^2\left(1+\frac{\tau_A^2}{c}\|\beta_{sp}\|^2\right) + \tau^2_{\varepsilon}\frac{c}{c-1}\left(2 + \frac{\tau_A^2}{c}\|\beta_{sp}\|^2\right) $$\n\nNote: Due to the extra $A$ factor, this only holds for $\tau_A = \Theta(1)$, versus our original $\tau_A = O(\sqrt{d})$. 
\n\nNote: this is the Signal-only version, so $y = \theta (A \beta\_{sp}) u^T\beta\_*$. \n\n## Full Moniri et al. Structure\n\nNow consider:\n$$ X = \sigma(W_0A^T) + c (A\beta_{sp}) \zeta^T $$\n\nThe risk becomes a random variable dependent on $W_0$, $\zeta$, and $\beta_{sp}$. Using standard assumptions (isotropic with unit expected norm for $\zeta$, $\beta_{sp}$, and the rows of $W_0$), we analyze the expected risk. Importantly, since $W_0$ and $(\beta_{sp}, \zeta)$ are independent, the bulk remains independent of the spike. This is because functions of independent random variables are independent. Hence, our assumptions are reasonable. \n\nTo get the generalization error for this model, we need to replace Lemmas 7-9. As an example, the new Lemma 7 resembles equations from Moniri et al. (Eq. 5) and Ba et al. (Eq. C.23):\n\n**Lemma 7:** For $W_0$ ($m \times d$), $X$ ($n \times d$), $m < n$, with $d/n \to \phi$, $d/m \to \psi$, $m/n \to c$:\n\n1. $\mathbb{E}\left[\frac{1}{\lambda+\mu^2}\right] = \frac{c}{\tau_A^2}m_c\left(-c\frac{\mu^2}{\tau_A^2}\right)$\n2. $\mathbb{E}\left[\frac{1}{(\lambda+\mu^2)^2}\right] = \frac{c^2}{\tau_A^4}m_c'\left(-c\frac{\mu^2}{\tau_A^2}\right)$\n3. $\mathbb{E}\left[\frac{1}{(\lambda+\mu^2)^3}\right] = \frac{c^3}{2\tau_A^6}m_c''\left(-c\frac{\mu^2}{\tau_A^2}\right)$\n\nwhere $m_c(z)$ satisfies:\n$$\frac{\psi}{z} H(z) - \frac{\psi - 1}{\psi} = m_c(z)$$\n$$H(z) = 1 + \frac{H^\phi(z)H^\psi(z)(c_1 - c_2)}{\psi z} + \frac{H^\phi(z) H^\psi(z)c_2}{\psi z - H^\phi(z)H^\psi(z)c_2}$$\nwith $H^\kappa(z) = 1 - \kappa + \kappa H(z)$\n\n---------\nAdvantages of our approach\n---------\n\nThis approach avoids using the Gaussian Equivalence property, providing finer control over finite matrix approximation errors. While we must restrict the magnitudes of $\tau_A$ and $\theta$ and take expectations over $W_0$ and $\zeta$, this allows us to:\n1. 
Better understand finite matrix effects\n2. Explore different target functions than Ba et al. and Moniri et al.\n\nWe are happy to present the corresponding results for the Signal-Plus-Noise case and the missing details. We presented a shortened version for brevity. \n\n----------\n\nWe are currently still working on the revision and will post it shortly."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": {
"value": "Introducing Dependency Between Bulk and Spike"
},
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "As hinted above, my main concern with this manuscript is the close relationship with previous works, namely (Ba et al., 2022; Moniri et al., 2024). \n\nCould the authors comment on the link between their results and (Ba et al. 2022) in the context of Gaussian Universality (see e.g. [1])? From my understanding of their paper, i.e. with a single spike in the feature matrix, they show that Gaussian Universality should hold in the learning rate regime considered in this paper. There is indeed an extensive regime of learning rates after the BBP transition that still falls under the umbrella of Gaussian models, resulting in effectively \"linear\" generalization properties. \n\nOne additional weakness of this submission is the related works coverage. The authors do a great job in covering the random matrix theory literature, while many manuscripts that analyze learned representations with gradient descent with different tools are not properly mentioned, see e.g. [2,3,4]. Although in these works the authors do not focus on the exact asymptotic calculation of the test error, many insights should translate to the present setting. On the other hand, [5] precisely characterizes the generalization error using non-rigorous methods; what is the relationship with the present work?\n\nThe results in the present submission should relate directly to the ones in Section 4 of (Moniri et al. 2024), albeit with the differences between the two settings correctly reported by the authors. Could the authors elaborate on this? \n\nWhat is the bottleneck for the present theoretical tools to analyze multiple spikes (corresponding to higher learning rate scaling in Moniri et al. 2024)? \n\nClosely related to the above, [5] worked along the lines of (Moniri et al. 2024) to provide the equivalent description in the regime where the spikes recombine with the bulk (maximal scaling regime). Do the authors see a possible extension of their analysis to this scaling? 
\n\n\n- [1] Hu & Lu 2022, Universality laws for high-dimensional learning with random features. \n- [2] Damian et al. 2022, Neural networks can learn representations with gradient descent. \n- [3] Dandi et al. 2023, How two-layer neural networks learn, one (giant) step at a time.\n- [4] Ba et al. 2023, Learning in the presence of low-dimensional structure: a spiked random matrix perspective.\n- [5] Cui et al. 2024, Asymptotics of feature learning in two-layer networks after one gradient-step."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper is nicely written. The mathematical claims are correctly phrased and the numerical illustrations are coherent with the main text. The research problem is relevant in the theoretical machine learning community."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors analyze the generalization properties of spiked covariate models. The theoretical analysis is motivated by recent works on two-layer networks trained with a single gradient step that showed how the feature matrix possesses different spikes associated with the learning rate scaling used in the optimization step. The proof scheme uses tools coming from random matrix theory that enables the asymptotic computation of the generalization error. The theoretical claims are accompanied by coherent numerical illustrations."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "My main concern with the present submission is the lack of clear elements of novelty. The paper heavily relies on results coming from related works and it restricts their setting in many ways (as fairly reported by the authors at the end of the manuscript). More details are provided below."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. In Line 178, $F_1$ denotes the case with a single spike (as shown by Moniri et al., 2023). However, Moniri et al., 2023 showed that $F_1$ can include multiple spikes, and the number of spikes depends on the step size of the gradient step. Where is the discussion about the effect of step size in this paper? Similarly, where is the discussion on the impact of the $o(\sqrt{n})$ term for $F_1$?\n\n2. What is $l_j$ in Theorem 2 (Line 204)? Do the authors mean $l$?\n\n3. In footnote 3 (Line 266), the authors say \"... the limiting e.s.d for $F_0$ is not necessarily Marchenko-Pastur distribution ... This difference is not too important, as instead of using the Stieltjes transform for the Marchenko-Pastur distribution in our paper, we could use the result from Péché (2019); Piccolo & Schröder (2021) instead.\" Why wouldn't the authors use the mentioned result directly?\n\n4. Why is there no regularization for the signal-plus-noise problem when there is regularization for the signal-only problem (Line 278-285)?\n\n5. Typo in Line 285: \"We consider on the instance-specific risk.\". Typo in Line 313: \" Then, any for data ...\". \n\n6. Undefined symbols in Theorem 3 (Line 312 - 324): $\asymp$ and $<<$.\n\n7. How do the authors arrive at \"Hence, we see that if the target vector y has a smaller dependence on the noise (bulk) component A, then we see that the spike affects the generalization error.\" in Line 380? Its connection to the previous part seems to be missing.\n\n8. How do the authors come up with the equation for the peak point of double descent in Line 477? Is it an empirical observation or a theoretical result?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* The motivation for this paper is good since the recent line of work studying two-layer neural networks after one gradient step (Ba et al., 2022; Moniri et al., 2023) has received significant attention.\n* The authors precisely characterize generalization errors (risk) for two linear regression problems with spiked covariance data, while the problems differ regarding the target function.\n + They provide bias and variance decomposition of the risk.\n + They illustrate the \"double-descent phenomenon\" and provide a formula for the peak location (a.k.a interpolation threshold) of the double-descent phenomenon, which is beneficial for understanding the phenomenon.\n + The authors specifically focus on the impact of the spike (in the data model) on the risk for different cases. Thus, they show when and how the spike affects the generalization error."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Motivated by a recent work studying two-layer neural networks (Moniri et al., 2023), the paper studies linear regression under a data model with a spiked covariance (Couillet & Liao, 2022). The spiked covariance consists of a spike component (signal) and a bulk component (noise). Thus, the authors characterize the risk (a.k.a generalization error) with a specific focus on the effect of the spike. They find that the spike does not impact the risk in the underparameterized case. In contrast, the spike introduces an additional term (called \"correction term\") in the risk for the overparameterized case. However, they mention that the correction term is of order $O(1/d)$, which vanishes in the asymptotic case. Thus, the spike does not affect the risk in the asymptotic case but does in the finite case. Then, the authors focus on a case where the targets $y$ only depend on the signal (spike) component of inputs $\\mathbf{x}$ in order to highlight the effect of the spike on the risk. In this case, the correction term depends on the alignment between the spiked eigenvector $\\mathbf{u}$ corresponding to the spike and the target function $\\boldsymbol{\\beta}$. Furthermore, the paper illustrates how the generalization error for this setting exhibits the so-called double-descent phenomenon with a formula for the peak location (a.k.a interpolation threshold)."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The presentation in this paper is not good\n + Although the paper is motivated by Moniri et al. (2023), there are significant discrepancies between the setting of this paper and that of Moniri et al. (2023), as the authors mention in Section 5. While Moniri et al. (2023) considered two-layer neural networks after one gradient step under an isotropic data assumption, this work considers linear regression under a spiked covariance data assumption. There exists a relationship between these two, but they are not exactly the same. For example, there is a difference between the target $y$ generation of the two settings. Furthermore, $\mathbf{A}$ (noise component) and $\mathbf{Z}$ (spike component) are dependent in the case of Moniri et al. (2023), while the dependence is ignored here (see lines 251-255).\n + Some notations are used without definition (e.g., $\delta_{\lambda_i}(\lambda)$ in Line 126, or $\Sigma(d_k)$ in Line 147).\n + There are significant typos in equations. For example, $y$ should be a scalar in Line 76, but it is written as a vector, which makes the equation wrong. Another example is that $l_j$ in Theorem 2 (Line 204) is not defined, and I think the authors meant $l$ instead of $l_j$. A third example is that the function $R_{spn}(c;\tau,\theta)$ defined in Line 301 and its usage $R_{spn}(c,0,\tau)$ in Theorem 3 (Line 317-321) are different in terms of parameters.\n\n* Limited contribution/novelty\n + Most of the results in this paper are trivial extensions of the results by Hastie et al. (2022) and Li & Sonthalia (2024), which significantly limits the novelty and originality of the paper. Note that Hastie et al. (2022) studied linear regression under a generic covariance assumption with bounded eigenvalues. Here, some eigenvalues can diverge as dimensions go to infinity, but this case is also covered by Li & Sonthalia (2024).\n + There exists a related work (Cui et al., 2024) that is not mentioned in this paper. Cui et al. 
(2024) characterized the generalization error (risk) for two-layer neural networks after one gradient step under isotropic data (the same setting as that of Moniri et al. (2023)). Although there exist methodological differences between (Cui et al., 2024) and this paper, the motivations are the same, and their settings are similar.\n + During the review period of this paper, a related work (Dandi et al., 2024) that can be considered a follow-up of (Cui et al., 2024) appeared on arXiv. While Cui et al. (2024) used the (non-rigorous) replica method from statistical physics for their analysis, Dandi et al. (2024) studied the same setting with random matrix theory, which is also the main tool in this paper. Therefore, this paper and (Dandi et al. 2024) studied similar settings with similar methodologies. Note that since (Dandi et al., 2024) appeared after the submission of this paper, I am only mentioning it for the sake of completeness.\n\nOverall, I think this paper should be rewritten with more focus on the impacts of the spiked covariance on the generalization error of linear regression, and the new presentation should clearly differentiate the current work from the work by Hastie et al. (2022), Li & Sonthalia (2024), Cui et al. (2024), and Dandi et al. (2024).\n\nCui et al. (2024): Asymptotics of feature learning in two-layer networks after one gradient-step. (ICML 2024)\n\nDandi et al. (2024): A Random Matrix Theory Perspective on the Spectrum of Learned Features and Asymptotic Generalization Capabilities."
},
"withdrawal_confirmation": null
}
] |
|||||||
zyGrziIVdE | Exploration by Running Away from the Past | main | Active | Reinforcement Learning;Exploration;Deep Learning | reinforcement learning | 3;3;3;5 | 4;3;4;4 | 3;1;2;2 | 2;3;2;2 | 3;2;3;3 | 3.5 | 3.75 | 2 | 2.25 | 2.75 | 0.333333 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Why is the notion of state coverage used in the experiments a good proxy for evaluating and comparing exploration? What is lost by ignoring the remaining dimensions in each of the environments considered?\n2. What is the value of $n$ in Fig. 2? Can you provide additional context about the rewards pictured in Fig. 2?\n3. Why is $r_W$ better than $r_{KL}$ in Fig. 2? This is mentioned in the paragraph starting line 377, but remains unclear.\n4. What do the colors represent in Fig. 3?\n5. Do the figures in Sec. 5.1 provide any insight into what is happening in the rest of the state space? Why not consider a visualization technique for visualizing high-dimensional data, such as $t$-SNE or PHATE plotting, instead of projecting onto x-y space?\n6. What do $\pi$ and $\pi'$ of Theorems 2 and 3 correspond to in the RAMP method and the remainder of the paper?\n7. How do Theorems 2 and 3 apply to the rest of the paper?\n\n**Important additional comment:** It is stated at several points throughout the paper (line 047, lines 325-327, 330-332, 467-469, 682-684) that state entropy maximization methods like APT [Liu & Abbeel, 2021] rely on probability density estimation. This is not accurate: APT and similar methods (e.g., Proto-RL [Yarats et al., 2022]) leverage non-parametric $k$-nearest neighbor entropy estimators, allowing them to maximize (proxies of) state occupancy measure entropy while avoiding density estimation."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Though the problem of state space exploration is very extensively covered in the RL literature, the proposed RAMP method provides what appears to be a novel approach to accelerating state space coverage. Due to its strategy of choosing policies maximizing divergence of state space coverage from that achieved by previous policies, it makes sense that RAMP will be more effective at rapidly exploring the state space than existing unsupervised RL methods (e.g., APT, SMM, Proto-RL) that simply maximize state occupancy measure entropy, and the experiments provide some support to this. Moreover, though the actual learning procedure used in RAMP is essentially a combination of existing techniques ([Eysenbach et al., 2020] for $r_{KL}$, [Durugkar et al., 2021] for $r_{W}$, and SAC [Haarnoja et al., 2018]), the combined approach detailed in Sec. 3.4 and Algorithm 1 appears to be novel and is interesting, and the fact that both KL-divergence and Wasserstein distance versions of RAMP are provided adds to its flexibility and significance. For these reasons, RAMP is likely of interest to the community and definitely merits further investigation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes RAMP (Running away from the past), an RL-based method for performing state space exploration by approximately maximizing either the KL divergence or Wasserstein distance between the current policy's state occupancy measure and the discounted sum of the state occupancy measures of all previous policies. This scheme aims to ensure that the state space coverage provided by the next policy is always maximally different from that provided by previously policies. The paper develops the RAMP method by deriving tractable proxies for these divergences, proposing reward models for each that can be used in conjunction with an RL algorithm, providing related approximation bounds, providing estimation schemes for each reward model based on existing work, and finally combining these steps to propose RAMP. Experimental results are provided that quantitatively illustrate what the reward models look like, compare RAMP with other intrinsic exploration approaches using a certain notion of state space coverage on a variety of tasks, and indicate that RAMP can be used as an exploration aid to accelerate extrinsic reward learning tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Despite the strengths discussed above, I have concerns about the experimental evaluation and theoretical results:\n1. Most importantly, the \"state coverage\" performance metric upon which the comparisons of Sections 5.2 and A.1 rely is insufficiently justified as a good proxy for measuring exploration and for making fair comparisons between the algorithms considered. As described in the third paragraph of Sec. 5.2, this metric is obtained by discretizing the space of Euclidean (x-y or x-y-z) coordinates of the agent's state space, recording whether each grid cell has been visited or not during training, then returning the percentage of the grid cells that have been visited. There are two main issues with using this notion of state coverage as a proxy for exploration. First, the state space dimensions in most of the environments are far larger than 2 or 3 (e.g., 18 for HalfCheetah, 113 for Ant), and, for many of these environments, pose information other than location in Euclidean space (e.g., joint angles, velocities) is far more important for learning to operate within the environment and for specific downstream tasks. Second, recording only whether a grid cell has been visited or not ignores more complex visitation behavior, such as the empirical state visitation frequency defined at the beginning of Sec. 2. To render the state coverage metric used more meaningful, it would be helpful to include ablations over the other dimensions of $S$ or comparison with other coverage notions, such as Shannon entropy of the empirical state visitation frequency.\n2. Implementation details for the RAMP algorithm, the algorithms compared with, the discretization used in the state coverage metric, and other aspects of the experiments are not provided. The experimental results are therefore not reproducible in their current form. 
In addition, across all experiments, the lack of implementation details makes it difficult to assess the fairness of comparison with existing methods and even the comparisons between $RAMP_{KL}$ and $RAMP_{W}$. This makes it difficult to evaluate the significance of the experimental results, weakening the overall contribution. To remedy these issues, a thorough description of the implementation details is needed.\n3. The qualitative results in Sec. 5.1 are difficult to understand, leaving the practical differences between $r_{KL}$ and $r_W$ unclear. See the questions below for specific concerns.\n4. The connection between Theorems 2 and 3 and the rest of the paper is unclear, and the assumptions made are so strong as to immediately imply the results. For the former concern, a description of what $\\pi$ and $\\pi'$ of Theorems 2 and 3 correspond to in the RAMP method is missing, making it unclear how the results are meant to be applied. Regarding the second concern, it is assumed variously that $|| \\rho^{\\pi} - \\rho^{\\pi'} || \\leq \\varepsilon_0$, $|| \\hat{r} - r^{\\pi} || \\leq \\varepsilon_1$, and that the average reward $J_{\\hat{r}}(\\pi') = \\langle \\rho^{\\pi'}, \\hat{r} \\rangle$ is sufficiently larger than $J_{\\hat{r}}(\\pi) = \\langle \\rho^{\\pi}, \\hat{r} \\rangle$ to ensure that the desired inequalities hold. Under these assumptions, the proofs follow with some straightforward manipulation of inequalities. To make the results more consequential, it would be helpful to clarify how they are meant to be applied in the context of the paper, then weaken the assumptions accordingly."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. In Line 81, the authors state that the methodology 'seamlessly' generalizes when T tends to infinity. From a theoretical perspective, does the limit exist without additional assumptions on the Markov chain created from the MDP and the policy? From a practical point of view, are there any limitations to applying the method when T is large?\n2. Is a policy with maximum state entropy an optimal solution to the objective function that is maximized?\n3. There is a typo in equation (2): $\rho^\pi$ should be $\rho^\pi(s)$.\n4. The optimization problems for computing the intrinsic reward functions seem to be on-policy, is it the case? If so, does it eventually result in on-policy optimization of control policies? If it is the case, it is worth mentioning that when used in combination with an off-policy RL algorithm for maximizing the intrinsic reward, additional interactions with the MDP are required, making the modified algorithm on-policy. This point should be clear in Section 3.4.\n5. Could the authors clarify the arguments of the paragraph at line 329? I understand the philosophy of maximizing a lower bound on the entropy instead of directly maximizing the entropy. Yet, I think that both approaches incrementally improve the Shannon entropy, in contrast to the first sentence of the paragraph. I don't understand the argument of the generalization across behaviours."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "1. The problem addressed is important to the community.\n2. The new objective function is theoretically motivated and provides new insights to compute good exploration policies."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors present a new algorithm for learning policies where the marginal distribution of states in a trajectory of length $T$ has a high entropy. Their method consists in iteratively maximizing intrinsic reward bonuses that measure a distance (metric) between the distribution of states of the current policy, and a geometric weighting of the distributions of states for the previous policies. That objective finds a motivation from an information theory property. Experiments compare the use of the KL-divergence and the Wasserstein distance to other algorithms."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. In Section 2, different justifications for introducing the learning objective pursued by the agent are wrong or weak in several aspects:\n\na. The justification on line 108 for going from equation (1) to equation (2) is in my opinion wrong. Using the entropy of the policy as a proxy for the entropy of the state distribution is a huge approximation. Maximizing the entropy of the policy does not provide a good state coverage in general nor in most practical cases. Note that if it were sufficient to maximize the entropy of the policy to get a uniform distribution of states, it would not be necessary to introduce a complex algorithm as the authors do.\n\nb. On line 128, the authors justify using the Wasserstein distance instead of the KL-divergence, as the KL does not account for a potential geometry in the state space. This fact results from the original choice to define as exploration objective the entropy over the state space, which does not account for a potential geometry of the state space. So by choosing to maximize the Wasserstein distance instead of the KL, the authors change the original hypothesis that the objective is to have high state entropy. While it can be discussed whether this is a potentially better framework to account for some geometry, it makes most of the previous mathematical justifications irrelevant.\n\n2. The authors claim in Section 3.4 that it is sufficient to optimize with any RL algorithm the reward model from Section 3.2 or Section 3.3 to maximize the objective in equation (2) or equation (4). This is equivalent to neglecting the entropy of the policy. The authors, nevertheless, eventually use SAC, which is an algorithm that regularizes the MDP rewards with the log-likelihood of actions. This should be clarified.\n\n3. Only the final values are reported in the experimental section. From my personal experience, complex exploration methods may be unstable, and the learning curves provide important insights. 
Adding them in the paper would make the results more trustworthy.\n\n4. In the experiments, there is no statistical evidence that the method at hand outperforms the concurrent methods. Most confidence intervals overlap.\n\n5. I think that the related work should include [1, 2], and probably other, more recent, works."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "### === Theoretical gains of running away from the past ===\n\nI think the idea of RAMP makes sense in that for the algorithm to explore, it must do something different from the past. However there is always a trade-off in practice and one must balance exploration vs. exploitation, a factor that is heavily environment dependent. There are simply environments where such exploration is not needed at all, while others where exploration is needed.\n\nAt this point, deep RL literature has already accumulated a large varieties of exploration methods, each dedicated to a specific domain. I think it will be valuable if a more general purpose method such as RAMP can characterize the theoretical gains achieved by just maximizing the distributional divergence between the current policy and previous data distributions.\n\n### === Intrinsic reward alone ===\n\nTable 1 shows the max performance that can be achieved by different exploration methods using just the intrinsic reward. In a sense, it measures how extreme the performance can reach by just optimizing for the exploration bonus. It is quite a surprise to me that RAMP's intrinsic reward leads to max gain very much higher than most methods. I think it might also be beneficial to plot the distribution of rewards achieved by different methods, to robustly measure the range of performance achievable. After all, max is not a very robust estimate of the possible performance obtained by the policy.\n\nIt also seems that Wasserstein based approach is much higher than KL - given that both are motivated by the RAMP narrative, it seems that the specific choice of metric is also very critical to the algorithmic performance. Do you think the underlying metric that defines Wasserstein distance is also critical, ie L2 vs L1 distance. 
Such ablations will be quite valuable to practitioners.\n\n### === Extrinsic reward ===\n\nIn Table 2 where extrinsic rewards are combined, it seems that KL RAMP is better than Wasserstein RAMP in general, which is in opposite to the results in Table 1 where Wasserstein RAMP is generally better. Can you elaborate more on this?\n\nAlso in general in continuous control tasks, it seems that exploration is not a defining factor to the final performance - as opposed to certain exploration heavy tasks in atari suites. As a result, it is not very clear if the gains in performance are due to the exploration bonus itself or rather due to some other confounding factors as a result of adding the corresponding loss.\n\nIn practice, how would you choose the exploration vs. exploitation trade-off factor ($\\lambda$ and $\\beta$ factors in the algorithm), and are the algorithmic performance sensitive to the choice of such hyper-parameters?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The strength of the paper lies in a fairly clear presentation of the motivation and methodology. The idea of \"running away from the past\" is not strictly novel but the paper proposes an algorithmically viable way to instantiate such an idea. The paper presents a fairly clear math formulation and has carried out ablations on choices of the algorithmic designs. The experimental ablation also seems fairly comprehensive."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes an exploration paradigm of \"running away from the past\" (RAMP), which encourages the RL algorithm to generate trajectories in distribution different from the past. This is instantiated as an intrinsic exploration bonus that estimates the discrepancy between the current and past visitation density. They show improvements on a few benchmark deep RL algorithm, showcasing the potential for this approach."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The idea of \"running away from the past\" is not strictly novel. From a theoretical standpoint, running away from old trajectories might not always be optimal and it is not clear theoretically what is gained by adopting such an approach. From an empirical standpoint, the ablations are carried out on the continuous control tasks, most of which do not seem to require extensive exploration to solve. It is not very clear if the claimed gains are really due to the exploration bonus, or some other unknown side effect."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See Weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper is written well, and the proposed intrinsic exploration objective is novel. The use of Wasserstein distance instead of the typical KL divergence is an interesting/novel choice."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a new intrinsic exploration objective for maximizing state entropy. The objective uses a discounted mixture of past state occupancy measures and encourages policies that maximize distance from the discounted mixture. As statistical distance, the KL divergence and Wasserstein distance are used. The experiments are evaluated on state-based RL environments, where state coverage and episodic returns are used to demonstrate the performance gain of the proposed approach."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1) Prior Work/Baselines: The paper misses several crucial works on intrinsic exploration (c.f., 1, 2, 3 for a survey). Particularly, there are works that use the model epistemic uncertainty/disagreement as an intrinsic reward which works well in practice and also scales favorably (4., 5., 6.). \n2) Theory: In particular, the model epistemic uncertainty is theoretically a well-studied objective (7., 8.). In 8, the authors derive a connection between maximizing the model epistemic uncertainty and maximizing information gain/conditional entropy of the trajectories, while also showing convergence for sufficiently smooth dynamics. \n3) Unclear motivation: Given the theoretical and experimental strengths of the method discussed above, its unclear to me what particular gap the authors are trying to address with their intrinsic reward. I'd appreciate the authors elaborating further on this. Furthermore, I think all the aforementioned works should be discussed in the paper and in particular one of the baselines should use the model epistemic uncertainty as the intrinsic reward. Perhaps one weakness the authors might raise is that the aforementioned works are computationally more expensive as they have to learn an ensemble of networks to quantify disagreement. However, this should also be empirically shown in the experiments (as the proposed method also learns a model to estimate the intrinsic reward). \n4) Hyperparameters are not provided in the paper, which makes it difficult for me to assess how sensitive the results are to the choice of hyperparams. In particular, I am curious about how $\\beta$ affects the performance of the algorithm. How can we appropriately select $\\beta$? 
Furthermore, doesn't the method suffer from sample inefficiency for large values for $\\beta$, i.e., when lots of data from the buffer is discarded?\n5) Scalability: Its unclear to me whether the proposed method would scale reasonably well to more high-dimensional settings such as POMDPs/visual-control tasks (note that 5, 6 also work for POMDPs). Could the authors elaborate further on this?\n\nI am happy to raise my score if my concerns above are addressed. \n\n1. https://arxiv.org/abs/2109.00157\n2. https://www.sciencedirect.com/science/article/pii/S1566253522000288?casa_token=ScYOIGv6D2wAAAAA:buNFoXMZLqPiWzo0CLpe3K-ac_nxundN5855FT0QwSnE6jhpm6VwPFS0UHyt1E9WXJePruqZsg\n3. https://www.mdpi.com/1099-4300/25/2/327\n4. https://arxiv.org/pdf/1906.04161\n5. https://arxiv.org/abs/2005.05960\n6. https://arxiv.org/abs/2110.09514\n7. https://arxiv.org/pdf/2006.10277\n8. https://arxiv.org/pdf/2306.12371"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "This study presents a new exploratory algorithm in Reinforcement Learning, where the agent explores by moving away from its past experiences using two distinct approaches."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024exploration,\ntitle={Exploration by Running Away from the Past},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zyGrziIVdE},\nnote={under review}\n}"
},
"abstract": {
"value": "The ability to explore efficiently and effectively is a central challenge of reinforcement learning.\nIn this work, we consider exploration through the lens of information theory.\nSpecifically, we cast exploration as a problem of maximizing the Shannon entropy of the state occupation measure.\nThis is done by maximizing a sequence of divergences between distributions representing an agent's past behavior and its current behavior.\nIntuitively, this encourages the agent to explore new behaviors that are distinct from past behaviors.\nHence, we call our method RAMP, for ``$\\textbf{R}$unning $\\textbf{A}$way fro$\\textbf{m}$ the $\\textbf{P}$ast.''\nA fundamental question of this method is the quantification of the distribution change over time.\nWe consider both the Kullback-Leibler divergence and the Wasserstein distance to quantify divergence between successive state occupation measures, and explain why the former might lead to undesirable exploratory behaviors in some tasks. \nWe demonstrate that by encouraging the agent to explore by actively distancing itself from past experiences, it can effectively explore mazes and a wide range of behaviors on robotic manipulation and locomotion tasks."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Reinforcement Learning",
"Exploration",
"Deep Learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/71878206b1505f94ea99d96ce82c065b21bba0ce.pdf"
},
"presentation": null,
"primary_area": {
"value": "reinforcement learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Exploration by Running Away from the Past"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zz9jAssrwL | Bayesian Policy Distillation via Offline RL for Lightweight and Fast Inference | main | Withdraw | neural network compression;reinforcement learning;robot learning | reinforcement learning | Jangwon Kim;Yoonsu Jang;Jonghyeok Park;Yoonhee Gil;Soohee Han | ~Jangwon_Kim2;~Yoonsu_Jang1;~Jonghyeok_Park3;~Yoonhee_Gil1;~Soohee_Han1 | 3;3;6 | 4;3;3 | 2;3;4 | 2;2;3 | 2;2;2 | 4 | 3.333333 | 3 | 2.333333 | 2 | -0.5 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
"value": "On behalf of my co-authors, I sincerely thank the reviewers for their thoughtful feedback and would like to withdraw our paper to avoid wasting their valuable time."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": {
"value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors."
}
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "n/a"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The introduction is written very well\n- The experiments are carried out in a rigorous fashion, considering different environments and repetitions.\n- Section 2 introduces the relevant concepts for the following derivations"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors present a method for batch constrained offline RL, from a \"distillation\" perspective."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**Significance:** \n1. I find the “distillation” perspective here not very fitting. Essentially you are doing batch constrained RL. Because next to the behavior cloning you are also doing optimization (Eq. (10,11)). From this it follows that the assumption is that the behavior policy (your teacher) is not optimal.\nSo the behavior policy may or may not perform very well. Then you optimize a policy, and add a constraint that it should not be too different from the behavior policy (because of safety, or risk of OOD). You now make this optimized policy sparse to achieve greater robustness (on top of the Bayesian modeling approach). The distillation approach only conceptually makes sense if the behavior policy is already good or near-optimal—otherwise, why distill imperfect information? But in that case, one would not need to do Eq. (10,11) and could just approximate the behavior policy directly using Eq. (8-9). \n\n2. Usually the biggest memory/scale requirements (and thus need to distill things) in RL do not come from the actor (the policy) but the critic (the Q-value in your case). Modeling \"what to do at which state\" (policy) is usually not scale-bounded compared to modeling \"why to do what at which state\" (Q/Value function, transition model). \n\n**Clarity & Writing:** \nThe writing in Section 3 needs significant improvement and lacks clarity.\n\n**3.1** \n\n\n1. The term “student policy is very confusing in this context and was not introduced. In Eq. (8) the policy is once referred to as “student policy” (“Then, the student policy is trained by solving the following minimization problem”) and then as “target policy” (“where \\pi_w is the target policy parameterized by w”).\n\nSo \\pi_w is both a “student” as well as the “target” policy. Then the authors say: “and is trained using BC. 
However, when the student policy size is excessively small, maintaining the performance of the teacher policy becomes challenging (Rusu et al., 2015).” Now there is also a “teacher” policy, which does not follow at all from BC introduction earlier. Because \\mathcal{D is the dataset, the initial batch, generated by “the behavior policy” and the minimization of Eq. (8) is referred to as the student policy, I can only assume that the behavior policy is the “teacher”.\n\n2. In Eq. (9) you are doing a ELBO minimization of (8) using a VI. Then you say “L_{RL-Elbo) in (9) includes a term for BC as in (8)”. This is again very confusing. EQ. (9) IS the BC term when modeled in a Baysian/VI way. The KL term follows from the Bayesian modeling perspective.\n\n**3.2** \n*“Therefore, learning a general behavior that makes good actions for states not included in the static dataset is considered when training the Q-function.”*\n\nA behavior policy is not learned or optimized in offline RL, as it is fixed due to generating the batch. Its specifics (and whether it can even be approximated) are unknown. While one can approximate the behavior policy, in my understanding, that is not your approach. You are performing batch-constrained optimization to learn a policy (Eq. (10-11) that also remains close to the data.\n\n*““a’ can be viewed as a combination of the action from the deterministic policy \\pi_w and a random perturbation, where \\overline{\\pi} ) represents the mean value.”*\n\nThis is misleading. You are using a Bayesian neural network (BNN)-based approach. The distribution over the actions originates from the epistemic uncertainty over the parameters, not from a “stochastic policy” or “random perturbation.” You sample from a posterior over network parameters, representing the belief about which policy is correct."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. In equation (10), is R(s,a) obtained from interacting with the environment? As far as I know, in offline RL, the reward should come from offline data. \n2. In the experiments of Table 1, why is the student network size set separately for the TD3+BC method? Additionally, is it reasonable to have the same network size for the Expert and Medium settings in this method? Does the significantly lower Return of the Expert in the Ant task result from this setting?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The proposed method creates a lightweight and sparse offline policy suitable for situations with limited computational performance and expensive data collection costs. \n2. Experiments were conducted in both simulated and real environments."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a Bayesian Policy Distillation (BPD) offline policy compression method that retrains a compact student policy network from a larger teacher network. Experimental results reveal that the proposed BPD successfully compresses the policy networks, making them lighter and achieving faster inference time."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The framework diagram in Figure 1 is too simplistic. It cannot explain how BPD is applied to the learning of the student network. \n2. The symbol \"p\" denotes different meanings in this paper. In Section 2.1, \"p\" represents the transition probability function, while in Section 2.2, \"p\" represents the posterior. \n3. In Table 1, the names \"Ant-v3\" and \"Ant\" are not consistent. \n4. From the experimental results presented in Table 1, the Return and Sparsity are not optimal in most tasks. \n5. Sections 4.1 and 4.2 refer to \"Table 4,\" but there is no \"Table 4\" in the paper; please check if it should be \"Table 1.\" \n6. The second experiment in Section 4.3 lacks persuasiveness for the paper's argument, as the student network compresses the model by learning from the teacher network, while the experiment learns a small network without any teacher guidance. This does not provide sufficient evidence for performance degradation in the student network's compression. Comparisons should be made between different sizes of student networks under the guidance of a teacher network to illustrate this. \n7. The order of Figure 4 and Figure 5 should be consistent with the order in which they are referenced in the paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Can you make the same ablation study as Fig. 5 with increasing network size for TD3+BC? How do the curves compare?\n- How should you choose the size of the policy network trained with BPD? \n- In Sec 4.4, you compare a teacher policy with a distilled policy learned with BPD on a real world pendulum. Given that TD3+BC (using a small network) was competitive with BPD in several environments, I am really curious to know how a small network trained with TD3+BC (at a similar sparsity level to BPD) would perform in this real-world inverted pendulum environment, and I think such a result would be helpful in demonstrating further the benefits of BPD compared to TD3+BC. Would it be possible to perform this experiment?\n- You set C to 2 (L429) while indicating that the literature “widely use[s]” 3 (L183). How did you decide on this number?\n- TD3+BC (with a small policy network) seems to be a very strong baseline (according to Table 1). Do you anticipate instances where BPD would be much more performant than TD3+BC?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "There are several things to like about this submission. \n- First of all, I really like the addition of the proposed method as the modification of an already existing component. The paper leverages the presence of a behavior cloning loss in offline RL algorithms (such as TD3+BC), by using this loss to make the student network amenable to sparsification. This essentially extends the usual offline algorithms that incorporate a BC loss to be compressed simply by using a variational training approach for the student rather than a standard supervised learning one. I find this is a smart solution since it uses an element that was already there (the BC loss) and transforms it to obtain a new property (a controllable sparsity of the student).\n- Figure 5 shows that the sparsification of the student network is very well behaved: as we lower the threshold value C, the sparsity (according to the definition of L358) monotonously “decreases” (except for Hopper-v3-medium) with the performance. This lets the experimentator decide what trade-off to strike between performance and sparsity.\n- The paper includes a real-world experiment to demonstrate that the proposed method is indeed quicker to run than the complex teacher. This is particularly commendable as it demonstrates the good sparsity (through the inference speed) and performance of the proposed method in the concrete real-world setting that it is supposed to be tailored towards. Given that offline RL algorithms are particularly interesting for real-world applications, this is a very welcome level of evidence. \n- Finally, I found the paper well structured (with, however, the notable absence of a related works section). The authors made an effort in slowly building their approach (perhaps even too much in Sec. 2!) and its rationale. Moreover, the text is well written. The equations are not overdone, and useful to illustrate the progressions of the text. The ideas were, overall, exposed very clearly."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses the task of learning compressed DRL policies in an offline learning setting. The proposed method leverages a Bayesian training technique to simultaneously (1) mitigate the issues of offline training thanks to a behavior cloning regularization and (2) allow for a sparsification of the network thanks to a well chosen prior on the network weights.\nThe method is shown to perform favorably compared to methods coming from the supervised learning compression setting and a classic offline RL technique."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "At the same time, the paper could be improved on several aspects.\n\n- First, I find that despite the clarity of most of the paper, the proposed approach was not sufficiently well placed in its environment. For instance, the paper uses a module usually intended to prevent instability due to OOD actions (the BC loss) and leverages it to a completely different effect (the compression of the policy). The idea is not problematic in itself (on the contrary), but I would have appreciated a discussion of the “twist” that this paper used regarding the BC loss compared to similar algorithms, maybe at the beginning of Section 3, to make the transition from the usual usage of the BC loss towards the new one. This aspect was exacerbated by the absence of an explicit related works section clearly comparing the proposed method and the existing ones, and a contributions section explicitly delineating the improvements brought by the BPD algorithm. Finally, unless I am mistaken, I find the method very close to TD3+BC, including the way to set the hyperparameters (Sec 3.4), obviously with the exception of the training and form of the student policy. If this is indeed the case, I think a clearer comparison with TD3+BC would be important to add. It should be made clear that TD3+BC is not intended for compression (TD3+BC), even though the naming of the metric “sparsity” (L358) can make this point confusing when looking at the results in Table 1.\n- I find the concrete performance improvements of the proposed approach to be relatively modest. Namely, at a comparable sparsity level, BPD has a rather comparable performance to TD3+BC (Table 1). Without an inherent sparsity mechanism, a small network trained with TD3+BC leads to a policy with roughly as many active parameters as the one of BPD, for a marked performance loss in only 3 environments out of 8. 
Given that TD3+BC seems to be the method BPD is based off, where the number of parameters is controlled by the size of the student network rather than a sparsity inducing prior, I would have expected it to perform significantly worse than BPD (while in practice TD3+BC gets a higher mean score in half the environments). \n- Regarding the reporting of the performance, selecting the 2 highest performances seems like an arbitrary choice. I would find it more fair to select the methods that perform the best, including the ones for which the confidence interval intersect. Alternatively, the reliable library [1,2] provides great tools to compare different algorithms (I do not expect the authors to re-illustrate the results with rliable since other works in offline RL have adopted this Table format, eg [3]).\n- I found the naming of the “sparsity” metric (L358) pretty confusing since it involves both the teacher network and the student network, such that a method that leads to a dense network and without a sparsity mechanism (TD3+BC) leads to a network with a very high “sparsity” metric. Maybe a naming scheme that makes it clear that the number is computer w.r.t the teacher (for instance, \"teacher compression ratio\" -- this is just a suggestion, others might be better) would be helpful.\n- I found that unlike most of the paper, the paragraph at lines 42-62 was overall pretty confusing (especially L56-62). L49-51, online algorithms not sufficiently tuned are indicated to perform poorly because they interact with the environment: why is that a problem? I find the argument presented at L69 about offline training much more compelling. The meaning of the sentence at L56-57 was also vague to me. At L58-59: “aforementioned drawbacks”: I struggled to find the exact drawbacks that were referred to.\n\n_Minor remarks_:\n- Could you please indicate what the +/- mean exactly (what uncertainty measure) in Table 1?\n- In Sec. 
4, the student policies are chosen to be smaller than the teacher network. I am not sure why this is the case and how this size was chosen, since Sec 3.1 indicates that the student network size does not need to be set a priori (L226).\n- I found Fig. 1 not to be very informative. As a personal opinion, I would have found a diagram with the different elements in the loss and their effects more useful.\n- L374-375: “increase their zero-weights to the maximum”: I did not understand this sentence, could you please rephrase it?\n- In Sec 4.4, you could specify that Appendix B contains the information about the computation of the score.\n- L55: missing dash in Kullback-Leibler\n- L464: Table 4 does not exist, you likely meant Table 1?\n\n\n[1] https://agarwl.github.io/rliable/, \n[2] Agarwal, Rishabh, et al. \"Deep reinforcement learning at the edge of the statistical precipice.\" Advances in neural information processing systems 34 (2021): 29304-29320.\n[3] Yang, Rui, et al. \"Towards robust offline reinforcement learning under diverse data corruption.\" arXiv preprint arXiv:2310.12955 (2023)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@misc{\nkim2024bayesian,\ntitle={Bayesian Policy Distillation via Offline {RL} for Lightweight and Fast Inference},\nauthor={Jangwon Kim and Yoonsu Jang and Jonghyeok Park and Yoonhee Gil and Soohee Han},\nyear={2024},\nurl={https://openreview.net/forum?id=zz9jAssrwL}\n}"
},
"abstract": {
"value": "High-performance deep reinforcement learning faces tremendous challenges when implemented on cost-effective low-end embedded systems due to its heavy computational burden. To address this issue, we propose a policy distillation method called Bayesian Policy Distillation (BPD), which effectively retrains small-sized neural networks through an offline reinforcement learning approach. BPD exploits Bayesian neural networks to distill already designed high-performance policy networks by adopting value optimizing, behavior cloning, and sparsity-inducing strategies. Simulation results reveal that the proposed BPD successfully compresses the policy networks, making them lighter and achieving faster inference time. Furthermore, the proposed approach is demonstrated with a real inverted pendulum system and reduced the inference time and memory size by 78 \\% and 98 \\%, respectively."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": {
"value": [
"~Jangwon_Kim2",
"~Yoonsu_Jang1",
"~Jonghyeok_Park3",
"~Yoonhee_Gil1",
"~Soohee_Han1"
]
},
"authors": {
"value": [
"Jangwon Kim",
"Yoonsu Jang",
"Jonghyeok Park",
"Yoonhee Gil",
"Soohee Han"
]
},
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"neural network compression",
"reinforcement learning",
"robot learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": {
"value": "kim|bayesian_policy_distillation_via_offline_rl_for_lightweight_and_fast_inference"
},
"pdf": {
"value": "/pdf/d5185e8fd31fa87250c038ae66288a8683162a9a.pdf"
},
"presentation": null,
"primary_area": {
"value": "reinforcement learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/2bd042adbb2e5f8939950ed6dab944952fd9fe29.zip"
},
"title": {
"value": "Bayesian Policy Distillation via Offline RL for Lightweight and Fast Inference"
},
"venue": {
"value": "ICLR 2025 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Withdrawn_Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||
zzR1Uskhj0 | High Probability Bounds for Cross-Learning Contextual Bandits with Unknown Context Distributions | main | Active | contextual bandits;cross-learning;high-probability bounds | learning theory | 3;5;6;6;8 | 3;3;4;3;3 | 2;3;3;4;4 | 2;3;3;3;3 | 1;3;3;3;3 | 5.6 | 3.2 | 3.2 | 2.8 | 2.6 | 0.123091 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. What is the intuition behind the algorithm?\n2. How does the indicator function $F_e$ resolve the unboundedness issue in the martingale inequalities?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper clearly presents the challenging point with detailed technical expressions and the novelty of the analysis."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes an algorithm that achieves high probability regret bound (which is stronger than the expected regret bound) for the cross-learning contextual bandits under unknown context distribution by developing refined martingale inequalities."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The explanation focuses only on the technical side, without an explanation of the algorithm itself. I suggest the authors spend more time including more explanations and organizing the paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "While the result is good, I am uncertain about its significance because neither the problem nor the algorithm proposed is new. There are no extensions of this result, no experiments. I am not sure if this standalone result is significant enough to be published at a premier conference."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper proposes a new look at an existing problem and provides a completely novel analysis in their work. The ideas and techniques proposed are completely new and can be of independent interest. This is particularly true for the martingale concentration result."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses the challenge of achieving high-probability regret bounds in the adversarial contextual bandit framework, where the learner encounters varying contexts and must minimize cumulative loss over time. The focus is on \"cross-learning\" contextual bandits, where learners can observe losses for all possible contexts, not just the current one. \n\nThe results leverage weak dependencies between epochs and refine existing martingale inequalities by exploiting interdependencies in observations. This analysis ultimately shows that the algorithm is effective in adversarial settings, even with unknown context distributions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "See questions."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "please see weaknesses"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "- The paper proves a high-probability regret bound, which is stronger than the in-expectation bound in the literature.\n- The analysis uses a nice observation that the different epochs in the algorithm are only weakly dependent, which enables proving a small bound for the cumulative bias across all epochs\n- While standard martingale inequalities cannot directly upper bound the cumulative bias, a novel technique is proposed to address this"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper studies cross learning in contextual adversarial linear bandits where the learner observes the losses of all contexts in each round. Recent work in Schneider et al. proposed an algorithm with a regret upper bound only in expectation. The paper studies the same algorithm and proves that the regret upper bound holds with high probability."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Can the reduction in [1] be used to map the multi-context to a single-context problem? The technique is proposed for non-adversarial losses; however, the action set map from distributional to fixed should not be affected by that.\nI understand that the paper only focuses on analyzing an existing algorithm. However, a comparison with such a technique in the related work is needed to justify the use of this algorithm or suggest alternative techniques to address the problem.\n\n[1] \"Contexts can be cheap: Solving stochastic contextual bandits with linear bandit algorithms.\" The Thirty Sixth Annual Conference on Learning Theory. PMLR, 2023."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Should the definition of regret on page 3 be reversed? As in, you are taking the loss of the best arm minus the loss of a policy, which should be a negative value (assuming positive regret). This would change the decomposition of the regret on page 5 as well, but it seems this would not affect the correctness.\n- I think the full description of the algorithm in SZ [NeurIPS’23] (or some simpler version of the description) could be shown much earlier in the paper. This would be helpful for readers who are not familiar with the previous algorithm.\n- Also, stating the main theorem in a preliminary section looks very non-standard to me. I’m not letting this affect my score, but please consider re-organizing this.\n- The meaning of 'with high probability' was never explained in the paper -- as in, it could mean with probability $1-1/K$ or with probability $0.99$. I think your bound gives the former, and this should be stated explicitly."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "In general, my opinion of this paper is positive. The paper appears to require a great deal of background to be able to parse. Despite this, I believe the paper did reasonably well in terms of explaining the existing work and its techniques. Getting high probability bounds in adversarial bandits usually requires some neat observations and technical steps. Although I’m not able to follow all the steps in the short time frame, I do think the paper contains some nice technical observations and ideas."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper studied adversarial context bandits in a special setting where the losses of arm $a_i$ could be observed under all contexts when the algorithm plays arm $a_i$. The goal, like in classical adversarial bandit problems, is to minimize the regret compared to the loss of the best arm in hindsight. The paper focuses on the setting where the loss sequence is adversarial and the context is stochastic with an unknown distribution. A recent work of SZ [NeurIPS’23] designed an algorithm with *expected* regret of $\\tilde{O}(\\sqrt{KT})$ in this setting, where $K$ is the number of arms. This paper conducted a renewed analysis of the algorithm in SZ [NeurIPS’23], and the main result is that the algorithm could actually achieve $\\tilde{O}(\\sqrt{KT})$ with high probability.\n\nThe main technique of the paper is heavily influenced by the previous work of SZ [NeurIPS’23]. In a nutshell, the low-regret guarantee of the algorithm crucially relies on the concentration of unbiased estimation of $E_{c}[\\ell_{t,c}(a)]$. Here, we cannot exactly compute the quantity since the distribution of the context is unknown. The key idea of SZ [NeurIPS’23] is to commit two steps for each EXP3 step and use one of them to estimate the distribution of the context. On top of that, this paper further utilized the weak dependency between epochs, and derived a martingale argument to get high-probability regret."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Although I'm mostly supportive, I think I couldn’t strongly champion the paper for the following reasons:\n- The scope of contribution: although the paper does contain some neat technical observations, the contribution appears to be somewhat incremental. After all, this is a new analysis of an existing algorithm, and the new analysis is not something that improves the previous bound (but instead obtains a high-probability bound). Again, I do acknowledge that such contributions are non-trivial. However, I do not think it’s enough for me to champion the paper.\n- If the paper is going to be mainly accepted due to its techniques, then I do not think the paper contains a substantial amount of new ideas. I appreciate the technical observations, and I agree that the steps are non-trivial. However, if the conceptual message is not as strong, and the merits of the paper mainly lie in the techniques, then the bar would inevitably be higher. \n- For a conference like ICLR, the lack of experiments could be an issue. I am *not* letting this affect my score since I often advocate for learning theory papers. However, I do want to raise this point since it is common for ML conferences to ask for experiments."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- (1) Is the concept class $C$ a finite set? If so, what is the reason for assuming a finite concept class $C$? Practically speaking, the contextual information would be like a vector in a compact set, as it is very unlikely to see two identical users.\n- (2) Where is the variable in Theorem 1 that characterizes the property of $C$? How does this variable appear in the bound proved in this submission?\n- (3) Could you please provide more evidence or further discussion of the applicability of the technique developed here so that we can better appreciate its potential?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The submission points out the difficulty that prevents the previous work from achieving a bound with high probability (lines 167–176).\n- It identifies the weak dependency between epochs (line 386).\n- It devises a new technique to solve the unboundedness issue induced by the weak dependency (the treatment of the Bias5e term in lines 395–411)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The submission studies contextual bandits with cross learning. Previously, the existing regret bound held in expectation. The submission refines the regret analysis so that the regret bound holds with high probability. The main contribution is to show how the weak dependency structure can be exploited to solve a concentration difficulty in the previous analysis."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- (1) Section 3.2 contains several subtopics, such as the regret decomposition, the discussion of each decomposed term, and the analysis strategy for the challenging term. A better editorial layout would improve the readability.\n- (2) The sentence “Notably, …” in line 111 is confusing. It seems unrealistic to be able to observe the loss for every context $c$. It also does not match the algorithm’s (Algorithm 1) behavior."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We give a nearly optimal high probability bound for the cross-learning contextual bandits with unknown context distributions."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024high,\ntitle={High Probability Bounds for Cross-Learning Contextual Bandits with Unknown Context Distributions},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zzR1Uskhj0},\nnote={under review}\n}"
},
"abstract": {
"value": "Motivated by applications in online bidding and sleeping bandits, we examine the problem of contextual bandits with cross learning, where the learner observes the loss associated with the action across all possible contexts, not just the current round’s context. Our focus is on a setting where losses are chosen adversarially, and contexts are sampled i.i.d. from a specific distribution. This problem was first studied by Balseiro et al. (2019), who proposed an algorithm that achieves near-optimal regret under the assumption that the context distribution is known in advance. However, this assumption is often unrealistic. To address this issue, Schneider & Zimmert (2023) recently proposed a new algorithm that achieves nearly optimal expected regret. It is well-known that expected regret can be significantly weaker than high-probability bounds. In this paper, we present a novel, in-depth analysis of their algorithm and demonstrate that it actually achieves near-optimal regret with $\\textit{high probability}$. There are steps in the original analysis by Schneider & Zimmert (2023) that lead only to an expected bound by nature. In our analysis, we introduce several new insights. Specifically, we make extensive use of the weak dependency structure between different epochs, which was overlooked in previous analyses. Additionally, standard martingale inequalities are not directly applicable, so we refine martingale inequalities to complete our analysis."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"contextual bandits",
"cross-learning",
"high-probability bounds"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/668f80e4183cd90e61a74d9534032aa7473a0bd7.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning theory"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "High Probability Bounds for Cross-Learning Contextual Bandits with Unknown Context Distributions"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |